Hello, I tried pupil_remote_control.m and got the error message that req_socket is undefined. How can I fix it?
@user-1221c7 The req_socket needs to be set up before you can run the program
change the socket variable to req_socket
so that socket will not exist anymore
run the program once
you may or may not get another error that socket is undefined
change req_socket back to socket
you will be fine
you will have to repeat this every time the req_socket variable is erased
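For reference, the setup step the MATLAB script is missing can be sketched in Python with pyzmq; host and port below are the usual Pupil Remote defaults and are assumed here, not taken from the script:

```python
# Sketch of the Pupil Remote handshake that pupil_remote_control.m performs.
# Assumes Pupil Capture is running with the Pupil Remote plugin on port 50020.
import zmq

def make_req_socket(ctx, host="127.0.0.1", port=50020):
    # Create and connect the REQ socket -- the `req_socket` the script expects.
    sock = ctx.socket(zmq.REQ)
    sock.connect("tcp://%s:%d" % (host, port))
    return sock

ctx = zmq.Context()
req_socket = make_req_socket(ctx)
# With Capture running you could then do, e.g.:
#   req_socket.send_string("t")        # request current Pupil time
#   print(req_socket.recv_string())
```

The MATLAB version is the same idea: create and connect the REQ socket before sending any commands; the "undefined" error just means that setup step never ran.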
thanks
Hello!
I was looking how I could have eye tracking in a VR app I wanna develop and I found this channel via GitHub
I am not a developer though, I am a researcher, and I'd like to know if I can use Core for a mobile VR app (Android) through Unity. Any idea? Thanks
Thank you guys for fixing the issue with the h.264 compression. It seems to work now!
@user-f9fccd Pupil labs trackers (and most trackers) require special IR cameras and illuminators.
Has anyone already used the data obtained from the eye tracking to generate graphs such as accuracy, time, or error rate?
Hey guys, can this be used to replace a mouse?
What version of Unity has everyone been using with the pupil labs trackers? My group and I have been using 2018.1.1f1. I ask this because we have gotten the scene to appear on the monitor, however we have not been able to see the scene through the actual Vive headset. Can anyone help with this issue?
@mpk Does the h.264 compression have a bad influence on the surface tracking? I get the feeling it's not working properly with compression.
@papr Hello, were you able to get around to using gaze subscription in Matlab?
I'm just wondering if you are aware of the bug which causes Pupil Player to crash when surface tracking is activated? When surface tracking is activated and you start Pupil Player with a recording, the surface tracking plugin crashes immediately. You can reactivate it by disabling and enabling it again. However, when there is no pupil data, the surface tracking will crash immediately again.
Hi guys, I am trying to run Pupil Mobile on my Android phone. My host machine is Windows. Both are on the same local wifi network and USB debugging is enabled on Android. Whenever I run Pupil Mobile with the tracker connected to the phone and then run Pupil Capture, this error appears. Any idea where I can start looking?
Is it possible to do eye tracking while wearing prescription glasses?
@user-525392 I run into this problem often enough. Give me a few minutes to give you a good answer.
@user-525392 So the short answer on how to begin solving this is to delete the pupil_capture_settings folder. Once you delete it, you should be able to open Pupil Capture (PC). My guess is that the first time you opened PC on your Windows computer, you had the Pupil Remote plugin on and your computer was connected to the internet. With this plugin running and "use primary network interface" toggled on, Pupil Labs is going to assume that this is the network it should look on when running Pupil Mobile, not your local wifi, which probably isn't set as the primary port on your computer. Long-term solutions: set up your local wifi as the computer's primary port, or, as a safety/sanity check, make sure your local wifi is the only network connected to your computer when opening PC. And untoggling the Pupil Remote plugin means it won't force you to use your primary network port if you still want your computer connected to multiple networks.
@user-bcaa09 would you mind raising an issue on github.com/pupil-labs/pupil regarding this error? We have not seen this before.
@user-c351d6 @papr can you check whether we can recreate this issue (surface tracker crashes when there is no gaze/pupil data)?
@user-c351d6 I have not seen degradation of surface tracking due to h264 compression, but it could certainly be possible. I don't know what the best way to solve this would be, though.
@user-c351d6 actually playing with the compression rate could be a way to address this issue!
@mpk @papr It seems like the surface tracker plugin crashes when there is either no square_marker_cache or no surface_definition, and it definitely also has something to do with the availability of pupil data. I could create an issue, or contribute to one if you reopen it.
@mpk I did some tests; there is a decrease in surface tracking performance, especially with fast movements. Markers in compressed files track worse during movement than in uncompressed videos. We may move to a wifi connection to get uncompressed files. However, this probably needs a fast wifi with a high data rate. Do you have any experience with wifi routers, and could you maybe recommend a reliable one?
@user-c351d6 offline detection is not an option for you?
@papr I don't think the issue is online/offline rather what h264 does to the video.
MJPEG on device is not an option because the 4 GB file size limit is reached quickly
@mpk Yes, around 10 minutes. And we also have the issue that we need to provide data around 30 minutes after the experiment has finished.
It's a bit hard to believe that MJPEG makes 4 GB files in under 10 mins. Let me double check that.
In this context I also noticed that offline detection is much faster on Mac than on Windows, even though the Windows computer I used was actually faster.
Hi, Does anybody know how to create a Gaze Plot similar to this one:
Taken by Jeff Pelz @ RIT using a DSLR with IR filter removed. Subsequent photos led us to believe that the Pupil Labs illuminators are actually more diffuse than this image would have you believe.
This image looks great!
...but also creepy, right? Our intention was to test if we could reposition the lights more effectively, especially after adding the extenders purchased from Shapeways. Some students are still looking at the many images we took. I'll let you know if we develop any firm opinions on the matter.
It looks like a diving helmet 😃
When we held a solid color matte surface at the tangent to the camera, the illumination was fairly uniform. This suggests that your illuminators are quite diffuse, and changing the angle will benefit us little.
@papr I've just shared the google drive folder with your team.
@papr I've been meaning to suggest...you should just create a saccade detector, not a fixation detector. Your fixation detector finds when the eye is still, which will only happen when the object is stable in head centered coordinates.
...a fixation, however, is when gaze is still upon something in world centered coordinates. When using a mobile tracker, I would define that just as "not in saccade."
One example of the difference in the two approaches is that my broader definition would include VOR and pursuit as "fixation."
...or, perhaps it's more accurate to describe them as "inter-saccadic intervals."
Really depends on the use case. Your style of fixations is good for user interaction (e.g. gaze-mediated button selection)
...but if the goal is to identify when the person is foveating something, then the ISI is more appropriate.
I'm going to send you some code.
We will probably be adopting this though: https://www.nature.com/articles/s41598-017-17983-x.pdf
Thank you for your code though! I will have a look at it. 🙂
@user-8779ef thats a very intriguing idea. We would love to try this out!
@papr Yeah, I've been loving Otto Lappi's stuff. My students read this one, but I'm behind. Keep in mind that he made that for head-fixed, so I imagine that it will omit VOR/pursuit. Is that a problem? Again, depends on the application... but you guys should be clear about how you define fixation in the instructions for your classifier. You're defining a fixation as when the eye is not rotating. My preferred definition, and the one that I think is most intuitive, is when gaze is stable on something that is fixed in the world frame. For example, staring at an object lying on the ground while walking up to it is still a fixation. Your algorithm would miss that.
...or incorrectly classify it as pursuit, if it's using algorithms designed for the head-fixed context.
I understand. The current algorithm is far from optimal. (independently of the head-fixed assumption)
Yeah, but not bad. We're using it - we are just careful to realize what it's doing. Other users won't be so aware unless you spell it out for them.
We're also doing some stuff for Google - trying to classify eye+head movements from the velocity signal using IMU + a Pupil Labs tracker.
IMU + stereo RGB for head pose estimation.
This will produce a database. We will share.
Rakshit and his creation!
My colleague is also working on head-pose estimation based on marker detection. This does not require stereo cameras, but it does require markers in the camera's field of view.
Rakshit and his matlab interface for hand labelling training data!
Hardware requirement vs additional visual stimuli trade-off
"Other users won't be so aware unless you spell it out for them." - The biggest reason why we have to improve our documentation.
Indeed.
Ok, this is fun, but responsibilities call. Chat later, of course.
👋
Hello, I have some doubts about processing the eyetracking data. 1. Is there a way to define areas of interest without using the markers when doing the recordings?
Which is the best way to create heatmaps for videos?
does anyone have a good suggestion for the best values for pupil_size_min and pupil_size_max? It looks like pupil_size_min can be anywhere from 1-250, and max anywhere from 50-400. That leaves a lot of room for different values. If somebody has suggestions for values that have worked well, I'd love to hear them. We've been struggling with a lot of bad readings and are trying to get our confidence as high as we can.
@user-988d86 These are pixel values, so this depends entirely on camera positioning.
...because the size of the pupil in the image (measured in pixels) will vary with the distance of the camera from the eye. Try using the "algorithm view" to find an acceptable range.
We're getting ready to get a Pupil Mobile bundle with the binocular high speed recording, but I'm confused about audio recording. The docs say that there is a sensor area on the Android app for sensors such as audio or IMU. But 1) is there a default sensor input that already works, and 2) how is the sound recorded? Is the idea that you attach a microphone to the glasses frame and plug it into the sound input of the Android device? Is anyone currently doing this who would have hardware recommendations?
I would love to hear from anyone who is using the Pupil Mobile setup, particularly anyone who has or is interested in helping to develop a system for coding fixations into AOIs (perhaps along the lines of what SMI did with their ETG system), or something more automated.
Hello, I have an additional question regarding frame rates that I'd appreciate some help with. We are working on syncing the 2D binocular eye tracker data with motion capture, and want to decide which frame rate to record the motion capture with, or interpolate the data to, so that the frame rates are similar. We are recording at 200 Hz for the two eye cameras and 120 Hz for the world camera. In the data exported from Pupil Player there are 400 (plus or minus 2-3) timestamps with norm gaze positions. Would it be correct to interpolate the data based on the timestamps, for example to 350 Hz (so that we don't upsample), and record the motion capture at 350 Hz? Or should we use the individual timestamps for each eye camera separately, based on the timestamps in the "base_data"? If the latter is better, how and at which timestamp do I look, given that both eyes have different timestamps in each row of the "base_data"? Thank you for your help!
Hey guys, any undocumented ways to adjust LED power? We believe the IR illumination may be too low when using the extenders.
Hello, I was wondering if we can find the number of fixations on an AOI during a certain time period during the data collection? Thank you.
Does focus matter at all for the gaze tracking algorithms? I normally wear glasses, but I cannot wear them and the Pupil hardware at the same time. Would I need to wear contact lenses to properly calibrate and use the eye tracking, or could I forgo the contacts and look around a slightly blurry room and still get the same accuracy?
@user-2686f2 depends on your vision. You can do eye tracking without glasses as well. I'm guessing accuracy might take a hit.
@user-8779ef at 192x192 or 400x400 px? We find the lower resolution yields better results and requires less illumination.
@NahalNrz#1253 I would do as you suggest: use the gaze data and interpolate.
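A minimal sketch of that interpolation approach in Python; the function name and the 350 Hz target are just examples, not part of Pupil's tooling:

```python
import numpy as np

def resample_gaze(timestamps, values, target_hz):
    # Linearly interpolate an irregularly-sampled gaze signal (e.g. one
    # norm_pos column from the Pupil Player export) onto a fixed-rate timebase.
    t = np.asarray(timestamps, dtype=float)
    v = np.asarray(values, dtype=float)
    new_t = np.arange(t[0], t[-1], 1.0 / target_hz)  # e.g. target_hz=350
    return new_t, np.interp(new_t, t, v)
```

You would run this once per exported column (norm_pos_x, norm_pos_y, ...) and record the motion capture at the same target rate.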
Hey guys, first time with the Pupil. I just downloaded Pupil Capture for macOS
Connected the Pupil and launched the app, but there is no feed. Is there any "On" button or "Start" ?
Or should the feed start directly? What/How can I troubleshoot?
(very excited to get started with this device :))
@user-b08428 feeds should start automatically. Can you ensure the USB is plugged in fully?
Sometimes you need to push the USBC end in a bit (slightly more force than you may be accustomed to) to ensure that it's firmly connected
BTW @user-b08428 what macOS version are you using?
Is the surface information actually recorded during a Pupil Capture session? I can only seem to get an export of the surface information if I load a session into Pupil Player and then use the Offline Surface Tracker. This seems to yield the same information, regardless of whether or not I used the Surface Tracker plugin during the Capture session.
@user-2686f2 that is correct. The online surface tracker is meant for live interaction. The offline version does the exact same thing as the online version.
Is there any way to play Pupil Capture videos using media players such as WMP/VLC? I am using the videos for training an object detection task, but am not able to open them in the tagging software. So is there any way to convert the videos to a normally playable format?
I'm a beginner. Can you please guide me on how to start using this?
@user-cc65ff please check out the docs if you haven't already: https://docs.pupil-labs.com
Good afternoon. I am using the mouse_control.py code to navigate an associated web interface via an extension in the Chrome browser. But when I run the mouse_control.py code in the application, the mouse movement is not computed in real time; it lags or the mouse hangs. Could someone help me solve this problem?
There should not be noticeable latency if you're running the script and Pupil Capture on the same machine. My suspicion is that your machine may already be using most of the CPU available for Pupil Capture. What are the specs of your machine @user-3f0708
Hello! What does the "confidence threshold" in fixation detector settings mean? Thank you
@user-e2056a Fixations are based on gaze. Low confidence gaze can be very inaccurate. Including these can lead to a lot of false negative detections. The threshold sets the minimum confidence that a gaze datum needs in order to be considered for the fixation detection.
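As a rough illustration of what that threshold does (the 0.6 default below is an assumed example, not the actual plugin default):

```python
def filter_by_confidence(gaze_data, threshold=0.6):
    # Keep only gaze datums whose confidence meets the threshold, the way
    # the fixation detector pre-filters gaze before classifying fixations.
    return [g for g in gaze_data if g["confidence"] >= threshold]
```

Raising the threshold excludes more low-quality gaze at the cost of having fewer datums available for detection.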
@wrp My machine is a Dell notebook with 8 GB of RAM and a 2.5 GHz Intel Core i7 processor
Hi, sitting with a bunch of students trying to figure out why a colleague's Pupil Pro with RealSense world camera doesn't even show RGB (it is not even detected). We see that in Oct. 2017, using the RealSense required us to make changes and compile. Is that still true?
Hey, hello everybody. I am working on an experiment and I need to record pupil size data, but in mm, not in pixels. Has anyone done this before? I am guessing that I need to know the distance between the camera and the pupil, but it will change all the time depending on the gaze; also, in some cases the camera is not exactly in front of the pupil, it sits at an angle. How do you deal with these issues? Finally, if it is necessary to measure the distance between camera and pupils, how do you do it so that it is not uncomfortable for the participants? Too many questions, I know. Thank you in advance for your help.
@user-90270c Which OS do you use? And does your Computer have a USB3 port?
@user-bfa5df The solution is to use our 3d model. It provides the diameter in mm. Be aware that it does not account for corneal refraction. You can read up on how this influences the actual pupil size: https://dl.acm.org/citation.cfm?id=3204525
Hello fellow Pupil People. I'm running into an issue with Pupil Capture recognizing my Pupil Mobile device. I recently had to tear down my setup, in which it was working fine before. I have both my computer and the Pupil Mobile device connected to the same local wifi network, and neither the computer nor the phone is connected to any other network. When I open Pupil Capture and change the NDSI manager to Pupil Mobile, I get "No host found". I'm assuming I can change this with the Pupil Remote plugin, but I hadn't messed with that much before, back when it was working. Any advice would be extremely helpful.
@papr Thank you so much! I will explore that approach and will let you know how it went.
@papr Thanks, that solved part of my problem. Any suggestion for compensating for corneal refraction?
@papr We were testing on macOS, but not on a USB 3 port. Can you share any documentation on getting the RealSense going? We want to help our UCSD colleagues, and are considering getting one of these for our own purposes. Thanks!
Just collected some data only to find that one eye does not have its corresponding timestamp file.
Anyone see this before?
@user-8779ef on v1.7?
@user-90270c the RealSense camera requires USB 3. Everything should work out of the box with the bundle for macOS
@papr -- very good to know! Same true for Linux?
@user-90270c unfortunately not. You will have to install the dependencies and run from source as described in the docs
@papr Good to know. Thanks!
Thank you @user-90270c and @papr ! Trying to run the Pupil Capture from source now.
Hi guys, I just encountered a bug which causes the eye cameras to deliver a really dark picture of the eyes which can't be used for eye tracking. It can only be fixed by restoring the default settings of everything in Pupil Capture. I did not change any parameters; it just happened after starting Pupil Capture. Have you seen this before? It's a pretty nasty one, because it seems to appear "randomly" and you have to reconfigure Pupil Capture from scratch - a huge problem while performing an experiment.
Hi! Is there a workaround for users who have transition eyeglasses? I was told it doesn't work. Unfortunately, I am one of those who wear them constantly. This is for demo situations
@user-c351d6 Alternatively, you can delete the user settings for the eye processes separately: close Pupil Capture, go to the pupil_capture_settings folder, delete user_settings_eye0 and user_settings_eye1, then start Capture.
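If this comes up often, the deletion step can be scripted; a small sketch (the settings folder location differs per OS, so its path is passed in):

```python
import os

def delete_eye_settings(settings_dir):
    # Remove the per-eye user settings files (user_settings_eye0 /
    # user_settings_eye1) from a pupil_capture_settings folder, if present.
    removed = []
    for name in ("user_settings_eye0", "user_settings_eye1"):
        path = os.path.join(settings_dir, name)
        if os.path.exists(path):
            os.remove(path)
            removed.append(name)
    return removed
```

Run it only while Pupil Capture is closed, as described above.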
Thanks! And another question: when using Pupil Mobile in combination with Pupil Capture, we see a huge delay between the movement of the head and the actual picture of the world camera. Is this normal? I also just had a recording that caused Pupil Player to crash...
Morning all. I have posted a question on the research and publications forum if anyone wants to help out with their thoughts. Didn't want to repost same issue here again. Not so much an issue really. More of a sense checking on fixation output EDA... #confuseddotcom
@user-c351d6 Please send the recording to [email removed] with a note that it causes Player to crash, and which OS and version you are using.
Some delay is expected, yes, depending on the throughput of your wifi.
@papr Does this delay get fixed in the recording due to time sync? Because right now, head movements are first detected as eye movements because of the delay.
@papr And does this also mean we have to rely on offline calibration?
(Just the world camera is delayed, the eye cameras are not)
Each video frame has its own timestamp. Detected pupil data inherits the timestamp of its video frame. Gaze is displayed as soon as it is calculated, i.e. when the eye video frames arrive. This can lead to delayed visualization in Capture. Calibration etc. should work correctly, since data is correlated based on creation time, not arrival time. All data should be correctly visualized if you open the recording in Player.
@papr Ah thanks, that was my concern. Then we just have to figure out why the recording is corrupt. I'll send you the recording. But back to calibration. Is the calibration in pupil capture synchronised?
Yes, it is. The calibration procedure accumulates pupil and reference locations. These include the correct timing. The timestamps are used for correlation before the actual mapping function is estimated.
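The correlation step described above boils down to matching each datum to the reference with the nearest timestamp. A simplified sketch of that matching, not Pupil's actual implementation:

```python
import bisect

def closest_index(timestamps, t):
    # Index of the entry in a sorted timestamp list that is closest to t --
    # the kind of nearest-timestamp lookup used to pair pupil data with
    # reference locations before fitting the mapping function.
    i = bisect.bisect_left(timestamps, t)
    if i == 0:
        return 0
    if i == len(timestamps):
        return len(timestamps) - 1
    # Pick whichever neighbor is nearer in time.
    return i if timestamps[i] - t < t - timestamps[i - 1] else i - 1
```

Because matching uses creation timestamps, network arrival delays do not affect the result.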
hey guys
i have a really stupid issue
the real issue is, my Capture doesn't pick up any of the cameras, but the intermediate issue is, where do I look for the log file created by Capture
Hi @user-459080 What OS and OS version are you using?
win 10 enterprise
Folder with pupil stuff is on D, in C/Users/myname there are some pupil folders with Capture settings
but no log file
@user-459080 if video feeds from cameras are not showing up in Pupil Capture (and if you are using official Pupil Labs hardware), you may want to try the following: (1) Ensure that the USB-C connector is firmly plugged in - sometimes it needs a little bit more effort to connect it. (2) For Windows 10 users, you will need admin privileges to install drivers - please ensure that drivers are installed for Pupil Cam in the device manager
so i am wondering, do i get anything more than the terminal info that gets printed as the thing is running
You can open Pupil Capture as admin user and click General > Restart with default settings
Hopefully this will resolve the issues you may be observing
Regarding the log: C:\Users\yourname\pupil_capture_settings contains user settings and a log of Pupil Capture; it is what was shown in the cmd prompt on Windows when you ran Pupil Capture
@papr Is it sufficient to activate the time sync plugin just in Pupil Capture? No action required anywhere else? I've sent you the recording which crashes.
Yes, Pupil Mobile is running time sync in the background automatically.
ok, so the front camera is still having issues
oh no, i am wrong, they all still do
I hope it is just a cable issue, because everything works on mobile with USB-C. I will figure it out tomorrow
yep, cable
I'm also getting a massive spam of these errors
ok, mine is running. I still have to figure out how to do a calibration when recording mobile
@user-c351d6 does this also happen when you use a different USB port? What you are seeing are USB transmission errors.
@mpk It happens when using a WiFi connection to Pupil Mobile. So is it the connection between the eye tracker and the smartphone? It does not happen on the MacBook
Can it be due to slow wifi? Just ordered an AC wifi router to have a faaaast connection
Dear Pupil Labs Team, I would like to ask whether I can assume that diameter_3d represents pupil dilation? Thank you. ^^
Hi all, I am looking at the pupil position Excel sheet exported from Pupil Player, and one eye's data is flipped both vertically and horizontally. I am wondering if there is a way to fix this so that the exported data is consistent between eyes? Any help would be appreciated.
Hi Team,
I'm having issues with the Pupil Capture app: 1. Drivers updated and installed. 2. Most recent Windows update acquired. 3. Most recent Pupil Labs apps downloaded. 4. Was able to record this morning no problem; then the computer got hot (it has since cooled) but has not allowed me to record since.
HALP. 😬
MainProcess - [INFO] os_utils: Disabling idle sleep not supported on this OS version.
world - [INFO] launchables.world: Application Version: 1.7.42
world - [INFO] launchables.world: System Info: User: Ginny, Platform: Windows, Machine: DESKTOP-EP8KVHA, Release: 10, Version: 10.0.17134
Running PupilDrvInst.exe --vid 1443 --pid 37424
OPT: VID number 1443
OPT: PID number 37424
Running PupilDrvInst.exe --vid 1443 --pid 37425
OPT: VID number 1443
OPT: PID number 37425
Running PupilDrvInst.exe --vid 1443 --pid 37426
OPT: VID number 1443
OPT: PID number 37426
Running PupilDrvInst.exe --vid 1133 --pid 2115
OPT: VID number 1133
OPT: PID number 2115
Running PupilDrvInst.exe --vid 6127 --pid 18447
OPT: VID number 6127
OPT: PID number 18447
Running PupilDrvInst.exe --vid 3141 --pid 25771
OPT: VID number 3141
OPT: PID number 25771
world - [ERROR] video_capture.uvc_backend: Init failed. Capture is started in ghost mode. No images will be supplied.
world - [INFO] camera_models: No user calibration found for camera Ghost capture at resolution [1280, 720]
world - [INFO] camera_models: No pre-recorded calibration available
world - [WARNING] camera_models: Loading dummy calibration
world - [WARNING] launchables.world: Process started.
Running PupilDrvInst.exe --vid 1443 --pid 37424
@papr Getting an assertion error in the fix detector, line 186: "assert dispersion <= max_dispersion, 'Fixation too big: {}'.format(fixation_datum)"
I have the log output, but don't want to spam the room. It's quite long.
....unfortunately, I've had issues replicating the error! So, I wouldn't lose any sleep until we can send a few more logs.
@user-8779ef Yeah, this should not be happening. The assertion was really just to find edge cases, but I was never able to replicate this either. The next release will not include the assertion.
If I do a calibration, then close the Capture window and unplug the device, then come back later and plug in the device, should I be doing another calibration, or can I expect the calibration to reasonably last across sessions? Similarly, if someone else uses the hardware, do you need to calibrate between different users?
@user-b571eb Yes, diameter_3d
is the modelled pupil dilation of the 3d model. The correctness of the value depends on 1) how well the model is fitted, 2) 2d pupil detection and 3) the bias introduced by not modelling refraction.
@user-2686f2 You definitely need to calibrate between different users! 3d calibrations are reasonably stable between single-user sessions, but it is recommended to recalibrate.
@user-ed6bcd Try restarting with default settings. Go to General Settings > Restart with default settings
@papr Great. I'll just comment it out and make sure to inspect the results for oddities.
Thanks!
@papr Thanks! On that note, I've scrolled through some previous chats and looked at the GitHub issues, and I saw someone say that different calibration methods are more useful for different scenarios, but there wasn't an elaboration. If this is the case, could you explain when you'd want to use one calibration method over another?
@user-464538 The right eye camera (id0) is physically flipped. Therefore the pupil data is "flipped" as well -- not really, since the pupil data still corresponds to the locations within the original video. Pupil data itself is only relative to the corresponding eye camera's coordinate system. There should not be a case where you have to flip this data manually. Gaze mapping (pupil data mapped to the scene camera coordinate system) handles the flipped camera automatically.
Did anyone successfully control UI with eye Gaze in Unity, if so how ?
Hi all, have a few questions about setting up mobile eye tracking for experiments, so I'm hoping this is the place to ask?
@user-2ff80a yes it is
great, thanks. So... our lab is preparing an experimental setup with numerous sensors in a motion capture space. Initially, we were using Windows 10 (with the latest Pupil Capture release), the Moto Play Z2 (as suggested) and lab streaming layer. We then tried a Pixel tablet with a similar setup, with fewer issues.
But so far, our test recordings have indicated that the Pupil Mobile setup we are using will not be stable enough for continued use throughout the official study, so I am hoping to get some advice as to whether there are ways to improve our setup, or whether we should try an alternative setup (abandon Pupil Mobile for now)?
The biggest issues seem to be overheating (especially the Moto Play) and pixelation of the world view, that we think may result
we also wonder whether anyone has had any issues with IR interference in an optical motion capture space? I'm planning to try to add a bit of shielding around the eye camera LEDs, since we have seen them in our mocap recordings.
We are also wondering if there are minimum hardware requirements for a system running Win10, because we are considering a mobile mini-PC (like a Dynaedge, but hopefully cheaper) running Pupil Capture directly.
Could you post an example screenshot of what you mean by "pixelation of the world view"? To what extent is the heating of the phone problematic? Does it overheat so much that it turns off?
sure thing. I'll search for a good example
Yes, during one recording, it was so hot it kept stopping after 3-5 minutes and the study participant had to take it out of the carry pouch to fan it so that it would operate again. we added a pocket to include a coolpack, for hotter recording days
@user-2ff80a based on your description, I'm assuming that you are streaming data from Pupil Mobile to Pupil Capture. Do I understand correctly?
yes
What kind of network do you have set up for wifi?
The Win10 machine and the WiFi router are both connected to a Netgear switch. We're currently using a TP-Link 450Mbps wireless N router dedicated to the mocap space
And here's one pixelated image
and another from shortly after
has anyone used the coordinate data from the eye tracking to make precision comparison using graphics?
@user-2ff80a We are also using a 450Mbps wireless N router but we don't have the same problems with pixelated images. However, we have problems with delays and possibly lost frames, which cause corrupt video files. Did you encounter problems like this? Btw, we are using a OnePlus 5T, which does not overheat - probably because of the more powerful processor
@user-c351d6 @user-2ff80a if possible try recording on device with later transfer. Wifi can result in delays and dropped frames...
time syncing is also possible when recording on device.
@mpk It is, but with two major problems. (1) Our recordings are 20 minutes, but uncompressed files reach the file size limit at around 10 minutes. Compressed files, however, make the surface tracking worse. I haven't played much with the parameters so far because of the second major problem. (2) Our results should be used in a debriefing session around 30 minutes after the experiment. Post-processing, however, takes around 40 minutes (including copying of files, pupil tracking, mapping, surface tracking, export). That's on a MacBook Pro from 2017. We have tried a faster Windows gaming laptop with much worse(!) results. Any idea how to get this experiment to run?
@user-c351d6 we also get dropped frames and inconsistent timestamps. this is one other reason we are considering switching to a mobile pc.
@mpk we also considered recording to the device, but from reading others' posts I got the impression that it is not so reliable either.
@user-2ff80a We were also talking about a laptop in a backpack...
Which clearly would be the worst case.
@user-c351d6 true, but it seems like the safest approach. we will try to improve the setup with the pixel tablet for now, but may have to look into the mobile pc option
@user-2ff80a please give recording on device a shot. We have made this more stable in recent releases!
@user-c351d6 sorry, I remember, you had other constraints! In this case I would recommend hardening the WiFi setup. Use a LAN cable to connect to the host computer, use fast 5GHz WiFi, etc...
@mpk Ok, cool... I'll test that out next. Thanks!
Guys, no love in HMD-eyes land?
Both the 2D and 3D demos are throwing errors upon calibration (after successfully connecting to the tracker). Error posted in the hmd-eyes discussion, where it belongs.
...I'll create the github issue. I was hoping for 3rd party confirmation that someone also shares the error.
We have a dedicated developer who started working on the hmd-eyes project. Improvements should come soon. Please bear with us. 😃
@papr That is GREAT news. Who is this person?
I think he did not join this channel yet.
He's hiding from me?
Smart.
This guy shows promise!
hey! I was wondering if you could help with an issue our lab is currently facing with Pupil Player. In particular, after calibration etc., we are trying to export our videos. However, in 90% of cases the export freezes at some point (but not Pupil Player as a whole) and the video doesn't fully export. On repeat, the same video always makes it to the same point, but each different video makes it to a different point before freezing. We are using the latest version of Pupil Player, so we are not sure why this error has started to affect us.
@user-7d3aea Could you send an example recording to [email removed] that we can use to reproduce this issue?
@papr Getting good reports from my folks after commenting out that assertion in the fixation detector.
No crashes yet, and I believe they processed an hour or so of data today.
I still would like to know how the assertion was triggered. The algorithm should not produce fixations longer than the maximum but the assertion triggers if it does. Please check the fixation lengths vs your maximum and let me know if you find violations of the maximum and by how much the maximum duration was violated.
Thanks - I'll do that when it comes time to analyze.
...and I'll let you know if I see anything.
Great, thank you!
Hi all, have some hardware/materials questions... Would it be possible to get more information about the material the frame parts are printed in (is it Nylon 12, or...)? and is it safe to assume the small covers situated over the eye cameras are the same material? Is there any other function to these covers beyond protecting the cameras?
@papr working on creating a demo video as we have confidential data. The error that appears when the export stops is:
Starting video export with pid: 11144
Application provided invalid, non monotonically increasing dts to muxer in stream 0: 9141814 >= 9141814
Process Export (pid: 11144) crashed with trace:
Traceback (most recent call last):
  File "shared_modules\exporter.py", line 183, in export
  File "shared_modules\av_writer.py", line 177, in write_video_frame
  File "av\container\output.pyx", line 201, in av.container.output.OutputContainer.mux (src\av\container\output.c:3545)
  File "av\container\core.pyx", line 228, in av.container.core.ContainerProxy.err_check (src\av\container\core.c:4020)
  File "av\utils.pyx", line 76, in av.utils.err_check (src\av\utils.c:1666)
av.AVError: [Errno 22] Invalid argument: 'XXXXXXXXXXXXXXXXXXXXXX.mp4'
@user-7d3aea Ok, was this a Pupil Mobile recording?
@papr yes, it was!
Did you see this issue? If not, could you try following these instructions? https://github.com/pupil-labs/pupil/issues/1203#issuecomment-396884543
It fixes the timestamps and hopefully lets you export the video
@papr Hello, I am querying Pupil data from Python for 10 seconds and collecting the eye gaze positions in real time. I then collect all of these values into a Python list. However, the size of the list is around 4000 entries. This doesn't make sense because the camera fps is 200, and so 200*10 should equal 2000 entries.
Does this issue have anything to do with the buffer containing data from previous samples? Should I clear the buffer at the start of every trial?
@user-cd9cff this is expected if you have a binocular headset.
Then it is 10 seconds * 2 cameras * 200fps
@papr This is how I am querying the data: rx = msg['gaze_normals_3d'][0][0] ry = msg['gaze_normals_3d'][0][1] lx = msg['gaze_normals_3d'][1][0] ly = msg['gaze_normals_3d'][1][1]
Each line of data that I unpack from the msgpack has the four values of rx,ry,lx,ly. There are essentially 4000 data points of rx,ry,lx,ly, not 2000 of rx,ry and 2000 of lx,ly
Shouldn't that mean that each data point takes both cameras into consideration, and therefore should not produce one data point for each eye separately?
Rx,Ry: Coordinates of the right eye
Lx,Ly: Coordinates of the left eye
Question about calibration -- We want to track gaze while people are looking down at a paper form on the desk. Currently we're calibrating using screen markers, but wondered whether the same method could be used with a tablet computer lying on the desk. This would calibrate against eye positions that more closely matched what people would be doing when looking down at the form. Has this been done? Any cautions? Any obvious problems with it? Many thanks for any advice.
@papr In addition, when I have the trial run for only 3 seconds, I get almost 3000 data points, which doesn't make sense according to the two-camera calculation; I should be getting 3 * 2 * 200 = 1200 data points
@user-cd9cff Actually, I will have to investigate this. My colleague mentioned something similar. Give me a few days.
@user-cd9cff in the meantime, could you check if you encounter duplicated timestamps?
@user-abc667 one major issue with that is that your calibration area would be very small. Gaze outside the calibration area can be very inaccurate. I would suggest manual marker calibration and moving the marker around on the desk while the subject faces the desk
@papr "one major issue with that is that your calibration area would be very small. Gaze outside the calibration area can be very inaccurate. I would suggest manual marker calibration and moving the marker around on the desk while the subject faces the desk" Happy to try this. Is there any accuracy difference in using manual markers vs screen markers? We want the best tracking accuracy we can get and in principle don't really care where the subject is looking if they are not looking at the paper form. [It's part of a psych test and we're concerned only with where they're looking on the page.] Thanks!
@user-abc667 there is no difference in the procedures from an algorithmic point of view. Be aware that the calibration area is relative to the field of view of the subject, not relative to the desk!
@papr Yes, I did encounter duplicated timestamps; there would be a group of five to six duplicated timestamps
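For what it's worth, counting duplicates in a collected list of timestamps takes only a few lines of plain Python (no assumptions about the data beyond a flat list of floats):

```python
from collections import Counter

def duplicated_timestamps(timestamps):
    """Return {timestamp: count} for every timestamp seen more than once."""
    counts = Counter(timestamps)
    return {ts: n for ts, n in counts.items() if n > 1}

# Example: five copies of the same timestamp show up as one entry with count 5.
dupes = duplicated_timestamps([255910.1, 255910.1, 255910.1, 255910.1, 255910.1, 255910.2])
```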
Hey guys, what's the peak wavelength of your LED's?
850nm
Thanks @mpk
We're having trouble sourcing a poly hot mirror large enough to span the vive eye cup at 45˚ angle, but we're looking!
Our need isn't great enough for a custom run, so we're begging for old stock 😛
Greetings. We are trying to extract a synchronized set of images from our Pupil Labs data from two world cameras (one on a child, one on a parent). I've played around with ffmpeg for doing this, but there are some variations in the sampling frequency, so when I specify the same number of frames, I'm sampling slightly different time segments.
@user-78dc8f you will need to sync by timestamps not frames. This can be done with a bit of python. We have a plan to integrate this into Pupil Player. let me check with @papr about this.
@mpk Today, I tried to run the experiment with fast 5GHz WiFi and a cable-connected PC. Two out of three times, one of the two eye cameras lost the connection - once with the same USB error as last time, and the second time with the following.
@user-c351d6 can you share the full log for this? It should be in the Pupil Capture directory. Once this error happens, and before you restart the app, just copy the logfile and send it to us.
We will try to fix this ASAP once we know what happened.
It's around 500MB
The logfile?
Yes
can you see if you find the traceback of the exception you pasted in there?
Or even share the file with data[at]pupil-labs.com
Yes I can; it's probably also interesting for you to know when it happened first.
Agreed!
also we need to make sure logfiles dont become 500mb 😃
I also agree 😉
@mpk Just a brief question about Pupil Player. Dragging a long recording into Pupil Player takes a long time to open. There is unfortunately no progress bar, but that's not a big deal as long as the user knows that it is not crashing but calculating something. It seems to be due to calculations of something which is not saved after opening the file. That's unfortunate because Pupil Player crashes sometimes. In case you need to reopen the file, you don't want to wait all this time again. Why are you not saving this data after finishing the calculations? It seems like you only save it when the application is closed properly. It's the same when you change settings and the application crashes.
Hm, not sure if you even save this data in Pupil Player sometimes. I just noticed it always takes ages to open long recordings.
@papr Thanks for your earlier reply, and please bear with me. We're at the beginning of a major research project involving eye tracking and want to be sure we're using your technology properly from the outset, and in a way that gets the maximum gaze position accuracy.
Our subjects will be working on a pen and paper task on an 8.5 x 11 form while sitting at a table. We want to know with the best accuracy possible where on the form they are looking at each moment. (If they look elsewhere than on the paper, all we need to know is that they are not looking at the form.)
As I mentioned, we're currently using screen marker calibration from a laptop screen. I wondered whether gaze accuracy would be better if the calibration was done in an orientation that more closely matched their head and eye position when they are working on the task. That's what motivated the question about doing a calibration while they were looking down at the table, eg, a screen marker calibration using a tablet lying on the table about where the paper form would be.
You mentioned that the tablet approach would produce a very small calibration area. Note that I'm thinking about a large tablet, something on the order of a Surface Book 2 with a 15" screen, ie a display size of 12.5" x 8.3". The idea would be to have something roughly the same size as the test form itself, sitting on the desk in front of the subject.
Two questions - + Would using a large tablet address the issue you mentioned? + Even more fundamentally, is there a chance that gaze accuracy would be improved by doing calibration in this fashion, ie while looking down at the desk (& tablet), as that mimics the subject's orientation and posture when doing the task? (And if the answer is that we just have to try this out to find out, that's ok.)
Thanks!
@papr About surfaces -- we understand and have been using the fiducial markers on our form. In looking over one of the videos
https://www.youtube.com/watch?v=bmqDGE6a9kc
the narrator mentions that the markers have an orientation. What does orientation mean in this case, and why does it matter? If I copy the markers from your image and paste them onto a test form with the same orientation they have in your image, will the orientations be correct (for whatever "correct" means)?
Similarly, what is meant by the orientation of a surface?
Many thanks for your help.
@user-abc667 Some quick thoughts. 1) Yes, calibration at the distance/ in the plane in which you want to maximize data! Build a custom calibration grid that lays on the table. Use natural feature / offline calibration mode.
I would calibrate a larger area than the book itself. Remember, you are calibrating within screen space, not in world space (a head movement may cause the book to occupy a different part of the screen)
You may also want to consider whether it is kosher (given your design / hypothesis / philosophy) to restrict movement at all
SOMETIMES, it's OK to use chin rests. Personally I try and avoid restricting movement in any way, physically OR through verbal instruction
Some additional tips...
Best of luck.
@user-8779ef Many thanks.
Making sure I have it -- create a sheet of paper, say 12.75 x 16.5 (ie 50% larger than 8.5 x 11), and put on it at least 9 icons of some sort so we can tell the subject to look at each one in turn, using the natural features approach, yes?
As you suggest, we do not want to restrict movement. That's the whole reason for wanting your glasses, as they are far more natural.
Thanks for the additional tips on lighting, etc.
@user-abc667 Be sure to leave a visual indication of when the person is looking at each calibration point. We use a system involving a button box with an LED on it. So, for each calibration point...
1) The experimenter indicates the target by touching it, and then removes her/his finger from the region. 2) The observer looks at the point, and briefly turns on the LED while looking (the LED is visible in the scene camera)
Be sure to tell them to point their nose at the center target during calibration, and then keep their head still!
The LED trick helps quite a bit during post-hoc processing, when you need to scrub through and define the frames on which the observer is looking at the natural feature/target.
...and, keep in mind that you will have to pilot test and maybe adjust the calibration sequence depending on your particular design.
Test run a few times and see how the track turns out!
the quality will be heavily dependent upon the algorithm settings during pupil detection. Be sure to use algorithm view. There is an art to all of this, and it will take time to learn.
(Sorry, use ROI and algorithm view of the pupil during 3D pupil detection). If that doesn't make sense now, it will eventually.
This system is not plug n'play, but it's very good.
You may have to sacrifice a few chickens and/or do a little dance.
(for good data)
...but, this is the nature of eye tracking 😃
@user-c351d6 this is known and the load speed is improved with the next release. We also improved memory management by a lot so that very long recordings can be opened on smaller machines as well.
@user-abc667 @user-8779ef you can also use the calibration marker for that. It will be detected in Pupil Player automatically.
@mpk Thanks for that. Is there a date for the next release?
Later this week is planned.
hey everyone, can anyone tell me about any applications based on eye tracking?
@user-7f5ed2 could you be more specific with respect to "application"?
@user-7f5ed2 you can see papers/projects that use/cite Pupil here: https://docs.google.com/spreadsheets/d/1ZD6HDbjzrtRNB4VB0b7GFMaXVGKZYeI0zBOBEEPwvBI/edit?usp=sharing
You can also see a number of featured projects via the Pupil Labs blog - https://pupil-labs.com/blog
The pupil-community repo also contains links to projects, forks, plugins, and scripts that are using Pupil
@mpk @papr : I sent a query yesterday about extracting synced video images from our child and parent world cameras. Any thoughts on this? Or should I start writing code to do this using the timestamps? Would obviously prefer not to if you guys have something developed 😉
@user-78dc8f Unfortunately we do not have such an implementation.
Edit: I would agree with mpk to sync video frames based on the timestamps in the world_timestamps.npy file
@papr Thanks for the quick reply. I'll get busy. Just to get your input...I was planning to use the video with fewer frames as the master and then extract the associated frames from the other one. Sound right?
What is your expected result? A picture-in-picture video? Or two separate video files that are synced if started at the same time?
@papr I am extracting the video data image by image so I can process the images using a deep learning network (to do object recognition). I need to extract synchronized images from each video so that I know if mom is looking at object 1 at time X and baby is looking at object 1 at time X, that the two images are viewing the world at the same moment in time. Does that make sense?
@papr So I'm thinking of using the timestamps for the slower video stream as the master and then finding the closest timestamps from the other video that match each time stamp from the master and selecting those paired images for extraction.
Yes, that makes sense
We do something similar for the eye video overlays
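That pairing approach could be sketched like this, assuming the per-frame timestamps of both recordings have been loaded from their `world_timestamps.npy` files with numpy. The matching itself is just a nearest-neighbor search over sorted timestamps:

```python
import numpy as np

def match_frames(master_ts, other_ts):
    """For each timestamp in the master (slower) stream, return the index
    of the closest timestamp in the other stream. Both arrays must be
    sorted in ascending order, as recorded timestamps are."""
    idx = np.searchsorted(other_ts, master_ts)
    idx = np.clip(idx, 1, len(other_ts) - 1)
    left = other_ts[idx - 1]
    right = other_ts[idx]
    # step back to the left neighbor wherever it is closer in time
    idx -= master_ts - left < right - master_ts
    return idx

# Hypothetical usage with two recordings:
# master = np.load("child/world_timestamps.npy")
# other = np.load("parent/world_timestamps.npy")
# pairs = match_frames(master, other)  # other-stream frame index per master frame
```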
@papr sounds good. I'll get busy.
@papr thanks for the quick input
@wrp By application I mean, for example, 'controlling a wheelchair using pupil motion' is a further application of eye detection, or take driver drowsiness detection as another example. I want to know some more applications based on real-time face or eye detection.
@user-7f5ed2 please check out the links posted above, there are examples of applications according to your definition
"Hello world of Pupil", I have lost one eye camera in the middle of experiments. That pesky bastard called time isn't on side, so what adjustments should I make to ensure useful data. I am contemplating reverting to monocular thinking.... Use case involves the visual management of instrument readout on flight simulator screen. Simulator provided score based on level of recognition, since I am looking at fixations primarily at this stage, how does losing one eye cam (still dont know why by the way) affect how I look at fixation outputs. Any pitfalls to look out for?. Thanks in advance.
Pupil Capture supports monocular mapping. But be aware that a single eye camera cannot map the subject's complete field of view. E.g. if you have a camera on the right eye, and the subject looks to the left, then the mapping will most likely be inaccurate. The reason for that is that the pupil is barely visible to the camera in these cases.
Btw, you can also buy a replacement camera on our store.
@papr, thanks for the quick response. You raise an interesting point about the field of view. I might be wrong, but perhaps the movement of the head toward the "left" as per your description might compensate for any losses?... I might be clutching at straws here, but given the association between the eyes (the distance between pupils, if you like), surely there must be some mathematical correction that can be done. I say this because the on-screen calibration still works. The setup is 3x 27" screens with 100% of the AOI on the middle screen. Calibration is done with the 5-point on-screen calibration (this seems to be sampling well - of course the head is held stationary and central to the FOV), and video playback does actually show fixations on the item (instrument) of interest. However, I am hoping I could meet my deadlines by accounting for any losses mathematically until I can run the tests again in proper binocular mode. Again, I might be clutching at straws. Thanks again.
@papr Is there a chance you can check out my questions from 4:20PM yesterday? There are a few seemingly useful replies, but I'd appreciate your perspective. Thanks!. Also, is there a chance for a phone call? I find these typed exchanges painfully slow at times. Happy to call you at your convenience if that's possible. I suspect 15 min would be plenty. (I'm at MIT, so EDT time zone.)
@user-239f8a Consider this quick sketch. This is a top-down view. As you can see the pupil will be barely visible to the camera. Without a visible pupil we cannot do any gaze mapping. There is no mathematical correction for that.
Looking forward will work as long as the pupil is visible to the eye camera
@user-abc667 I can second everything that @user-8779ef said. I strongly advise following his ideas.
Unfortunately, I am not available for a call.
@papr. Thanks for correcting my simplistic thought process. It would seem binocular is the way; is there a repair service one could take advantage of? A new kit is out of budget at the moment, unfortunately.
@user-239f8a Please write an email to [email removed] concerning that matter.
Sure thing. Your time and thoughts are very much appreciated.
Hey guys! I am sorry for broadcasting; we just got shiny Pupil glasses and are trying to send annotations with the https://github.com/pupil-labs/pupil-helpers/blob/master/python/remote_annotations.py script. But we get nothing in annotations.csv after the Pupil Player export, even with the annotation plugins enabled everywhere. Why could that be? P.S. There are no errors in code execution; the code starts and stops the recording, so there is a ZMQ connection. Thanks!!!
@user-96d0dc Could you send the recording to [email removed] I will have a look at this in the coming days.
@papr Thank you! Sent an email from [email removed]. Waiting for your reply!
The recording does not include a pupil_data file, and the info.csv file is incomplete as well. This indicates that the recording was aborted instead of being stopped gracefully.
@papr I am sorry! Will record you a new one!
I am just saying that this is the reason why you might not see any exported annotations. 😉
@papr no-no, I got it from all the recordings 😦
@user-96d0dc I will have a look at the second recording tomorrow
@papr Hello, I have a pupil labs camera connected to my laptop and I want to sync the clocks on both systems so that the stimulus that I am trying to run from my laptop will run on the same clock as the pupil cameras. Can you please point me towards the appropriate documentation and github repositories in order to accomplish this?
Hey, I'm trying to use pupil labs to track where a user is looking on a computer screen. Currently, the gaze data's normals give me where the user is looking with respect to the world camera, which isn't what I want. Is there an easy solution to my problem?
@user-38ff51 Look up surface tracking in the docs
@papr The problem with the remote annotations was the packet encoding. OpenSesame uses Python 2, but Pupil uses Python 3. Removing use_bin_type=True from serializer.dumps helped. Thanks!
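To illustrate the fix described above (a sketch with the `msgpack` package; the annotation keys follow the remote_annotations.py helper but may differ in your version): on Python 2, `str` is bytes, so `use_bin_type=True` marks every string as a msgpack "bin" object, and a Python 3 receiver then sees bytes where it expects text. Packing without `use_bin_type` uses the legacy "raw" type instead, which the receiver's `unpackb(..., raw=False)` decodes back into ordinary strings.

```python
import msgpack

# Illustrative annotation payload (keys assumed from remote_annotations.py).
annotation = {
    "topic": "annotation",
    "label": "trial_start",
    "timestamp": 1234.5,
    "duration": 0.0,
}

# Pack with the legacy "raw" string type (the Python 2 compatible choice)...
payload = msgpack.packb(annotation, use_bin_type=False)

# ...and decode raw objects back to text on the receiving side.
decoded = msgpack.unpackb(payload, raw=False)
```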
How does the pupil tracker hardware communicate with a PC?
What kind of communication protocols are used?
HI @user-24fdfb Pupil headset connects to a laptop/desktop via USBC-USBA cable. You can also connect to a select number of android devices running Pupil Mobile via USBC-USBC.
thanks @wrp
I guess your Python/C++ code on GitHub has everything necessary to make the connection useful on the PC
If I wanted to relay information about pupil movement rates to a web service, that should be easy enough to tweak into the Python code, right?
@user-24fdfb You can just download Pupil apps from https://github.com/pupil-labs/pupil/releases/latest - plug in the headset, and start Pupil Capture.
If you want to relay information to a web service you don't even need to touch the source code. You can subscribe/publish to the network based API. Please check out docs here: https://docs.pupil-labs.com/#interprocess-and-network-communication
I haven't used ZeroMQ before but at a glance, it looks like it uses a TCP/IP protocol that isn't over HTTP so the only web service I can directly link that to would be one specifically designed for ZeroMQ. Is that right?
That looks pretty helpful anyway, though. Writing a separate ZeroMQ to HTTP API relaying server sounds better than adjusting your project's code.
@user-24fdfb what explicitly are you trying to accomplish - can you provide a concrete example. Are you trying to subscribe client side only?
@user-24fdfb you might want to take a look at this repo for reference - https://github.com/hookdump/asistiva - the project is unfinished AFAIK, but has the skeleton/foundation there for a web app that communicates with Pupil Capture
Besides the mouse_control.py script, is there any other code that Pupil Labs suggests using with the tracking device?
@papr Just a quick question about the stuff I'm coding up (from yesterday). I found a way to use ffmpeg to extract all the images between two time points in a video. I tried this out using 1 min of data. Then I went into matlab (since I suck at python) and read in the timestamp data within the same range. Happily, I got the same number of timestamps as exported frames. Then I repeated this process with 10 min 26 sec of data (the window of data we want to process). That yielded 18676 exported images from the child video using ffmpeg, and I count 18677 timestamps in matlab (not bad); but for my parent video, I get 16224 exported images from ffmpeg and 13527 timestamps in matlab. That's a big difference. Any thoughts?
@papr Here's my ffmpeg command: ffmpeg -ss 00:00:55.379 -i 06NIHVWM131B_child_worldviz.mp4 -ss 00:00:30.000 -t 00:10:26.000 06NIHVWM131B_ChildFrames/06NIHVWM131B_Child%d.jpg
@papr Note that there are some bells and whistles in that command to speed extraction, but basic idea is to specify the start time and duration...
So each frame in the original video has a corresponding timestamp. You need to know at which frame indices ffmpeg starts and stops extracting images. These indices can be used to extract the corresponding timestamps.
The videos are saved with a fixed time between frames. This means that the duration given to ffmpeg is different from the duration calculated from the timestamps in matlab.
@papr Ok. I'll dig into whether I can get ffmpeg to return a vector of frame numbers it is extracting... unless you happen to know how to do that magic? I think there's a way to dump a log file or something...
You only need the start and stop indices. But I don't know if ffmpeg gives that type of feedback
Idea: Don't skip to the starting time. Extract all frames, but instead of only numbering the frames, include the video frame timestamps in the file names. This assumes that this is possible. After extracting, you can delete all files whose time is before your original starting time, and you now know the frame index for the matlab timestamps
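An alternative sketch that avoids renaming files: compute the frame-index window from the timestamps first, then extract exactly those frames. This assumes the i-th frame of the video corresponds to the i-th entry of the recording's `world_timestamps.npy`:

```python
import numpy as np

def frame_range_for_window(timestamps, t_start, t_stop):
    """Indices of the first and last video frame whose timestamps fall
    inside [t_start, t_stop]. Because frame i corresponds to
    timestamps[i], these indices can drive frame-exact extraction."""
    start = int(np.searchsorted(timestamps, t_start, side="left"))
    stop = int(np.searchsorted(timestamps, t_stop, side="right")) - 1
    return start, stop

# Hypothetical usage:
# ts = np.load("world_timestamps.npy")
# first, last = frame_range_for_window(ts, ts[0] + 30.0, ts[0] + 30.0 + 626.0)
# ffmpeg can then select by frame number, e.g. with a filter like
#   -vf "select=between(n\,FIRST\,LAST)" -vsync 0
# (check the select-filter escaping against the ffmpeg documentation)
```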
Hey guys, we are trying to set up surface tracking. We printed markers, put them on a monitor, enabled the plugin, and added a surface, but it does not detect any markers and I am unable to select anything with the mouse. What might I have missed?
I highly recommend using 1280x720 as the resolution. It results in less distorted images. Additionally, you need to reduce the min_marker_perimeter value. Reducing it too much might result in false positives though!
@user-3f0708 The mouse control script is just a very simple example. See our community repository for related work: https://github.com/pupil-labs/pupil-community
Hey all, is surface tracking the best way to track someone's gaze on a computer screen? Accurate enough to control a mouse?
@user-3856e9 You can test this with the example script in the Pupil Helpers repository
Ah ok I'll take a look, thanks
But to answer your other question: There is currently no other official way to track regions of interest other than through surface tracking.
@papr Hello, as I have been looking at the pupil epoch clock, I have realized that the clock does not increment by one each second. For example, if I run a five-second trial, the clock will have incremented by twenty counts
How does the epoch clock increment?
@user-cd9cff the time unit is always seconds
@papr About surfaces -- we understand and have been using the fiducial markers. In one of the videos about using surfaces (https://www.youtube.com/watch?v=bmqDGE6a9kc) the narrator mentions that markers have an orientation. What does orientation mean in this case, and why does it matter? If I copy the markers from the image in the documentation and paste them onto a paper form with the same orientation, will the orientations be correct (for whatever "correct" means)? Similarly, what is meant by the orientation of a surface? If this is already explained in the documentation, feel free to point us there. Thanks.
@user-abc667 The markers have a top, bottom, left, and right side. This is meant by orientation.
The markers do not have to point in the same direction for the surface definition. You can download Pupil Player and the dataset from the video from our website and play around with it.
@papr [255910.14473, 0.7996776304667995, 0.5497307879030378, nan, nan] [255910.168938, 0.849932840856795, 0.5052694567434703, nan, nan]
The numbers in the first column are pupil timestamps
but they were taken five seconds apart
however, the timestamp does not reflect this
@user-cd9cff From were are you reading the timestamps?
from python
This is the gist of the Python program
Yeah, but in which context? As a plugin? As an external script? This looks suspiciously like a misinterpretation of bytes to me
I am reading it from a Python script
The timestamp is being queried on line 36
and being stored in the ptime variable
Ah OK, so the first value in each list is the timestamp. But this makes sense
0.02 seconds passed between the generation of the two gaze data points that you received
they were five actual seconds apart
if __name__ == '__main__':
    f()
    time.sleep(5)
    f()
Ah, yes, but the subscription receives all data. Zmq caches incoming data in a FIFO queue. You need to continuously read from the socket and discard values that you do not want.
This is not a request reply pattern.
But pub-sub: http://learning-0mq-with-pyzmq.readthedocs.io/en/latest/pyzmq/patterns/pubsub.html
So then how would I get timestamps that are five second apart?
You receive all data, check each incoming datum's timestamp, and discard data that is less than 5 seconds apart
so while the loop is running, I read all data?
Yes
All the data that is available. And while no data is available, the recv call blocks. This prevents the process from running at 100%
@papr
Will download the data from video; thanks for suggestion.
Also: "Randy The markers have a top, bottom, left, and right side.... The markers do not have to point into the same direction for the surface definition."
Then what is the operational consequence of them having an orientation? (If we can ignore marker orientation, that's fine, just trying to make sure we're using the technology properly.)
@user-abc667 There is nothing much that could go wrong in terms of orientation
Doesn't this mean that the data is unreliable? I am trying to start collecting data at one point, continuously collect data for five seconds, and then stop
The timestamp, however, shows a different time
You can always edit the orientation of the defined surface after the fact in Pupil Player.
@user-cd9cff I don't understand. Why wouldn't the timestamp change over time?
This is what I got over a five second trial
[257181.793281, nan, nan, nan, nan] b'257181.793281' [257181.80135, nan, nan, nan, nan] b'257181.80135'
That's because you only read the first two data points of your trial instead of the first and the last.
yea sorry
I just realized that I sent you the wrong data
Zmq receives data in the background and caches it. Calling recv reads from that cache. The cache is implemented as a FIFO queue
what I actually got was this: 257291.668573 257294.968804
I understand that it is a FIFO queue, but a second in pupil epoch time takes longer than an actual second
the timestamp shows a difference of 4.3
A possible reason is that there are still 0.7 seconds worth of data in that cache queue.
Could you please share the current version of your script?
Ah never mind
This is the script that queries each data point https://gist.github.com/saipraneethmuktevi/bbdf2e298deef659149e55743b480a75
This is the sort of 'main' script that interfaces with Matlab: https://gist.github.com/saipraneethmuktevi/08e9464527800e2794394371702e2818
I found the issue. The problem is that the subscription takes a bit of time. You start your timer directly after subscribing.
The correct way would be to: 1. Subscribe 2. Sleep for 0.5-1.0 seconds 3. Ensure that the queue is empty by reading all available data 4. Start the timer 5. Continue receiving data 6. Stop after x seconds 7. Compare timestamps
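Steps 3 and 4 above, drain the cache and only then start the timer, can be sketched with a plain FIFO standing in for zmq's receive cache (all names here are illustrative, not Pupil's API):

```python
import queue
import time

def drain(q):
    """Step 3: read everything the cache already holds so receiving starts fresh."""
    drained = 0
    while True:
        try:
            q.get_nowait()
            drained += 1
        except queue.Empty:
            return drained

# Simulate zmq's internal FIFO cache: stale data accumulated between
# subscribing and the start of the trial.
cache = queue.Queue()
for ts in (257181.79, 257181.80, 257181.81):  # stale timestamps
    cache.put(ts)

stale = drain(cache)             # step 3: ensure the queue is empty
trial_start = time.monotonic()   # step 4: start the timer only now
cache.put(257294.96)             # a fresh datum arriving after the timer started
fresh = cache.get()
```

With real zmq, the drain step would call recv with the NOBLOCK flag until it raises zmq.Again, instead of using get_nowait.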
another situation I am getting is this: [257554.485408, nan, nan, nan, nan] [257728.509449, nan, nan, nan, nan]
These two timestamps were taken consecutively, yet indicate a 200 second delay
is this because of a cache that is not empty?
@user-cd9cff Timesync is disabled? What is the timestamp of the next datum?
[257728.623036, nan, nan, nan, nan]
it goes in an orderly fashion from there
should I empty the cache tho?
A jump of 200 seconds is weird. Is this reproducible?
yes
Is it always the first timestamp that is off by that much?
[257740.048154, nan, nan, nan, nan] [257833.294428, 0.49847990851838514, 0.8020016869958443, nan, nan]
no, it's usually in the middle
Could you make a screenshot of Capture, please?
like this?
Yes, thanks
So your actual goal is to receive gaze data in Matlab?
Yes, and I am achieving this by compiling gaze data in Python trial by trial and then submitting it to Matlab
Mmh, the screenshot with the timestamps is really weird. At the bottom the timestamps jump back in time
I will have to think about this after having a bit more sleep (German timezone 😉 ). I am off for today. Let me know if you find the source of the problem.
@papr Per earlier discussion, we're looking into using natural features calibration for our task; recall that we want to track eye position for someone looking down at a test form on the tabletop.
Plan is to create a calibration form, a piece of paper roughly 12.5" x 16" with 9 small icons (eg, dog, cat...) in 3 rows. Put that on the table where the test form will be and calibrate by asking the subject to look at each icon in turn, clicking on that point in the world window (per the documentation).
a) Is this the right idea? Anything we need to change?
b) I'm trying to understand how natural features calibration works. Is the pupil position collected during this process just wherever the eyes are when the point in the world window is clicked (hence possibly noisy), or does the system look for pupil position fixation after the world window is clicked, and use that position?
Thanks!
@papr if you’re up this late answering questions in Germany, props to your dedication
@user-abc667 I would simply use a printed calibration marker and use manual marker calibration. This allows you to do automatic offline calibration. Natural feature calibration in Capture is noisy since we try to track the clicked position with optical flow, and that is not really reliable.
Also: If you use pictures you will introduce label noise since there is not a single point where the subject looks at, e.g. the subject could be looking at the cat's tail or its head. Use visual stimuli where the gaze target is clear to the subject, e.g. our calibration markers have concentric circles with a single point in the middle.
@papr Got it, many thanks. Will try this. And indeed, thanks for the late nite (for you) service!
Hey everyone, while experimenting with the high-speed camera and the exposure time value, the luminosity jumps from bright to a darker state while sliding the exposure time bar (126-128 and 159-160, for example). Any idea why this is happening? Does changing the exposure time trigger an automatic reaction from the sensor?
best regards
Hello Everyone,
I accidentally ripped the cables out of one camera connector, and I need to get some replacement metal pins... does anyone know the model of the connector? On the housing attached to the PCB I could read J and JKJ. Any help will be appreciated!
Best Regards
@user-833165 please send an email to info@pupil-labs.com re replacement parts and/or repairs
@papr Hello! I am using the normalized 3D gaze vector, i.e gaze_normal_x/y/z. What is the origin considered for calculating each eye's normalized gaze vector? The docs says that the visual axis goes through the eye ball center and the object that's look at in the world camera coordinate system. I'm asking because I need to compute the cyclopean gaze vector.
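While waiting on an answer about the origin, one simple convention for a cyclopean direction is to sum the two unit gaze normals and renormalize. This is an assumption for illustration, not necessarily how Pupil Labs combines the eyes:

```python
import math

def cyclopean_direction(left, right):
    """Sum two unit gaze vectors (e.g. the gaze_normal triples per eye)
    and renormalize. Averaging directions like this is an illustrative
    convention only, not Pupil's documented method."""
    summed = [a + b for a, b in zip(left, right)]
    norm = math.sqrt(sum(c * c for c in summed))
    return [c / norm for c in summed]

v = cyclopean_direction([1.0, 0.0, 0.0], [0.0, 1.0, 0.0])  # bisects the two inputs
```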
is anyone having problems using the screen markers calibration procedure?
I activate the calibration but no fixation is ever recognized and the process stops on the first marker
any ideas?
tks
@user-11dbde can you share a video?
@user-11dbde it worked with our thingy (not on the HoloLens). Maybe try with ours to check if it is a software or hardware problem? I am on holiday, but Oliver should be able to give it to you
Hello, we are getting good eye tracking with adults but really bad eye tracking with preschool-aged children, even though everything seems to be going ok during the calibration. We are wondering if we could adjust any settings in the detection/processing pipeline, which may be tuned more for adult pupils, to better detect a child's pupil? Thank you! Sara
Hi @user-380f66 thanks for the feedback. If possible, can you send a sample to data@pupil-labs.com so that we could provide concrete feedback (we realize this may not be possible due to privacy - but if possible, let us know). What is the accuracy reported after calibration?
Talking about the use of the Pupil Mobile app: do I have to use a USB-C port on the phone, or can I also use a normal micro USB-B port? I need to buy the right cable from the glasses to the smartphone, so I wonder whether the phone's port type must match that of the glasses. Thank you
@user-c1220d This will most likely not work. See the Pupil Mobile repository for a list of supported devices.
Hey guys, we are in the middle of data collection before a big deadline and one of our headsets seems to be failing us.
One eye camera is dropping out. We have replaced the eye camera with the camera from another headset, and the problem persists. So, we think its the headset wiring, and not the camera. Given the time constraints, we're just about ready to order a new headset with super fast shipping.
However, I thought it best to check in before we do this, just in case you want us to try anything funny.
Hey @user-8779ef Please write an email to [email removed] I do not think there is much more to try.
Ok, thanks @papr .
We have a 120 hz system. Can we use that headset with the 200 Hz cameras?
....are they interchangeable?
They should be interchangeable. But it looks like the cable tree is at fault, if you tried a different camera already.
Yes, but that's good news. I believe we can use the 120 Hz system's cable tree with the 200 hz camera arms.
120 hz headwear, but take off 120 hz cameras and replace with 200 hz cameras.
It seems to be the 200 hz system's tree that is at fault.
You can test it by attaching the 200hz cam to an 120hz headset
Exactly.
Before you start ripping out cable trees...
No no no ripping out
Thanks!
what does this comment mean in mouse_control.py: m.move(0, 0) # hack to init PyMouse - still needed?
@papr
Hello, were you able to figure out why the pupil timestamps would jump and not correlate with real time?
Hey, what does the "on_srf" parameter mean? (In the "fixations_on_surface" files.) If I want to count how many times someone fixates on a certain surface, should I ignore the "on_srf = false" values?
Hello, I wanted to ask how we may calculate the depth of each time frame via Pupil Labs? We are currently piloting an experiment where we want to capture what individuals look at in their everyday lives. Thus, I have integrated all of the calibration methods (i.e., screen, manual markers, natural features) A question and concern that I have encountered is that I do not know how to calculate the actual depth of the gaze detections. I have been looking at all the documents and played around with all the plugins, but, I have been very unsuccessful in actually retrieving the depth of the gaze. I will appreciate any insight. Best, Celene
We map gaze to surfaces regardless of whether the gaze is actually on the surface. This field tells you if that gaze was within the surface bounds. @Johnny#7075 correct, ignore these
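A minimal sketch of that filtering against an exported fixations_on_surface CSV; the column names here are assumed for illustration and may differ from the actual export:

```python
import csv
import io

# Stand-in for the contents of a fixations_on_surface export (columns assumed).
csv_text = """fixation_id,surface,on_srf
1,screen,True
2,screen,False
3,screen,True
"""

rows = csv.DictReader(io.StringIO(csv_text))
# Count only fixations that actually fell within the surface bounds.
n_on_surface = sum(1 for r in rows if r["on_srf"] == "True")
```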
@user-cd9cff No, I have not had time yet and will probably not have any this week since I am at EuroPython in Edinburgh 🙂
thank you so much for replying
tomorrow i have time to check this problem again
@papr Thank you for your response.
@user-3f0708 that line of code was used to init PyMouse and moved the mouse to the 0,0 screen coordinate at the beginning of the script. At the time the script was written this line was necessary to init PyMouse, but it may no longer be needed. Try commenting it out and see 😸
Hi guys, is there a possibility to define a surface manually? For example, surface "xyz" consists of markers 1, 2, 3, 4, and afterwards you define the position of the surface. It's quite some work to define a surface when you have a lot of markers and you just want a certain subset of those markers related to a surface. The surface tracking also crashes every second surface definition.
@user-c351d6 there is not currently a method to define surfaces "manually" (e.g. with an external file). Crashes should be resolved with the forthcoming v1.8 release
@user-c351d6 Am I assuming correctly, that your marker setup is fixed and that you cannot show the surfaces one by one to the camera?
Exactly, we have to walk in our environment to get all the surfaces
How about creating "dummy" surfaces like on a piece of paper, that use the same markers and proportions as the original surfaces? You could register them one by one without having to walk around
Sounds like a good workaround.
hi, I'm using the pupil headset with 200Hz cameras and Pupil seems to have problems detecting its 3d eye model on one eye (the left). The right one is also bad sometimes, but it occurs far more often on the left side. Any ideas why this happens? Maybe the positioning of the cameras isn't optimal?
Looks like the 2d pupil detection does not work well
Could you change to the "Algorithm view" in the general settings of the eye windows?
And post a screenshot of that
sure
algorithm mode
(the right is flipped upside down)
Mmh, looks actually good. Try rolling your eyes. The 3d model needs pupil positions from different locations to build up the model
yes, rolling the eyes helps, but most of the time the model "is lost again" in less than a minute
which does not occur with the right eye .. at least much less frequently
Hi, does the calibration support monocular test?
@user-63941a What do you mean by test?
We want to compare the searching ability between two eyes and one eye, so the experiment will require participants to close one eye during the test. I wondered if Pupil Capture supports that, and do we need to adjust the settings for a monocular test?
I understand. Closing one eye will lead to low pupil confidence for data from this eye. The calibration procedure will discard low confident data automatically.
Okay I get it. Thanks for the information.
Hello! @papr, may I ask a question about obtaining the duration of gaze? We can obtain the "gaze_timestamp" from the "gaze_positions" export, but what is the definition of gaze by Pupil Labs? How can we find the duration of the gaze? Thank you!
Pupil data is relative to the eye camera space. Gaze is mapped pupil data and is relative to the world/scene camera coordinate system. For each eye video frame there is exactly one pupil datum. Therefore you only have one gaze datum for each pupil datum (monocular mapping) or one gaze datum for a pair of pupil datums (binocular mapping). Gaze therefore does not have a duration; each gaze datum is the gaze position at the time at which the eye video frame(s) were taken.
Thank you,@papr, where can I find the reference of this definition of gaze?
See https://docs.pupil-labs.com/#development-overview under Pupil Datum Format
Thank you, @papr. I was looking for the time between when the gaze position started falling on a certain surface and when it left that surface. However, when I look at the "surface events" file, sometimes the gaze enters one surface, followed by entering another surface without exiting the previous one (there's no overlap between those two surfaces). Is the data normal? If not, what would you suggest to improve it? Thank you.
Thanks for the link https://docs.pupil-labs.com/#development-overview , @papr, is there any literature support available?
What do you mean by "literature" support?
~~If the surfaces overlap you can get "enter" events without having "leave" events.~~ Ah, you say that they do not overlap.
If you know that they do not overlap, then you can simply assume that they exited the previous surface as soon as they enter a new surface.
We cannot assume or test if the surfaces overlap. Therefore we cannot "fix" this since it can be expected behavior.
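Under that non-overlap assumption, dwell durations can be recovered from the enter events alone, treating each new enter as an implicit exit of the previous surface. A sketch with hypothetical data:

```python
def dwell_durations(enter_events):
    """enter_events: (timestamp, surface_name) pairs sorted by time.
    Each new enter is treated as the exit of the previous surface,
    which is only valid if surfaces do not overlap (as discussed above)."""
    durations = []
    for (t0, surface), (t1, _next) in zip(enter_events, enter_events[1:]):
        durations.append((surface, t1 - t0))
    return durations  # the final visit is open-ended and therefore omitted

events = [(10.0, "A"), (12.5, "B"), (13.0, "A")]
result = dwell_durations(events)  # A was viewed for 2.5 s, then B for 0.5 s
```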
@papr Where can I find blink detection support?
Hello everyone. Didn't know if any of you have run into this ValueError before. I was exporting a video in Pupil Player and have plenty of available space so I'm not entirely sure what it is referencing. Using a Windows system and tried this on Pupil Player version 1.2.7 and Pupil Player version 1.7 and got the exact same error. Wanted to check here first before making a formal issue in the Github.
Thank you @papr. By "literature support" I mean: does the method used by Pupil Labs to identify gaze exist in any literature?
@papr 1) Do you have a way of monitoring fixation?
2) Do you have a way of detecting blinks in data recorded remotely without pupil capture?
@user-cd9cff Pupil Capture and Pupil Player have fixation detector and blink detector plugins built in. You can classify ("detect") both blinks and fixations post-hoc with Pupil Player or in real-time with Pupil Capture.
@user-e2056a can you be more specific regarding your question: "method used by Pupil Labs to identify gaze exist in any literature?" I am confused by the phrase "gaze exist"; can you please clarify?
@user-e2056a See this spreadsheet with published papers that cite us: https://docs.google.com/spreadsheets/d/1ZD6HDbjzrtRNB4VB0b7GFMaXVGKZYeI0zBOBEEPwvBI/htmlview#gid=0
Good afternoon, friends. Can someone among you handle Pupil Labs data processing? I need your help.
hi again, referring to my previous post (https://discordapp.com/channels/285728493612957698/285728493612957698/471298673410572288): is it possible to "fix" the current 3d eye model, at least against large changes (while still compensating for small slippage)?
@user-82e7ab The 3d model is currently being reworked to include refraction. This will include overall model stability improvements. Unfortunately, there is no official release date yet
thanks Pablo for keeping us up-to-date
Are you guys planning to release 1.8 before the weekend?
Hi, Any idea why recordings of several minutes in capture have raw videos of each eye saved as videos of only 1 frame?
Hey @papr, I'm on an older Mac (iMac (27-inch, Mid 2010) running High Sierra, and getting an instant crash of pupil player and service upon launch. Is this a known issue, or do you have any suggested fixes?
...running release 1.7
1.6 also crashes.
Submitted a new issue.
If this is an issue with all older Macs, then perhaps you should add Mac version to the software requirements.
We are going to format the machine and try again.
@user-0e8148 it only looks like it. It was a bug where videos were saved with a huge fps number. All frames are there. Please export with Player to get correct videos.
We have someone getting set up to run from source. Hopefully the error messages will be informative.
From a tech: "I tried Pupil Player on a clean 10.13 machine, the same make of the machine in question, with the latest version of Pupil Player and I'm getting the same error. I stand by the idea that it has something to do with the way in which they're compiling their binaries. The machine level error which is being thrown is known to be associated with compilation problems due to the way in which the code optimizer is written. Meaning, even if we were to nuke back to a clean 10.13 install there would be no change in behavior. "
So, it appears that pupil player is not compatible with older macs.
@user-8779ef running from source is not an option?
Because yes, for the bundle everything is compiled on one machine and packed into the bundle. If the processor is not compatible, the bundle will not work. Which CPU do you use exactly?
I have a question re: blink duration. When we've exported data after the fact from videos using Pupil, we've been able to get information on blink duration. However, when we are getting data in real time by subscribing to blink notifications, we don't get any information about the duration of a blink once it has ended. I'm trying to figure out how we could calculate this ourselves using the data we are receiving, but we aren't getting onset/offset messages for every blink event. Sometimes, we only get an onset, sometimes only an offset. Anyone have any suggestions about this? Thanks!
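One defensive way to pair those messages is to keep only complete onset/offset pairs and drop unmatched events. A sketch, with the (timestamp, type) representation assumed rather than taken from the actual notification payload:

```python
def blink_durations(events):
    """events: (timestamp, kind) tuples with kind 'onset' or 'offset'.
    Unmatched events, like the lone onsets/offsets described above,
    are dropped instead of being paired across separate blinks."""
    durations = []
    pending_onset = None
    for ts, kind in events:
        if kind == "onset":
            pending_onset = ts  # a repeated onset replaces the stale one
        elif kind == "offset" and pending_onset is not None:
            durations.append(ts - pending_onset)
            pending_onset = None
        # an offset with no pending onset is ignored
    return durations

observed = [(1.0, "onset"), (1.25, "offset"), (2.0, "offset"), (3.0, "onset")]
result = blink_durations(observed)  # only the complete pair yields a duration
```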
@user-8779ef pupil bundles do not work with pre Intel i3/i5/i7 macs.
Hi, I'm new to eye tracking. I've studied a lot about it but didn't find a good solution for blink detection. What's your idea? I'd be very happy to find an answer.
Hi @user-128c78 you can read about our plugin's approach to blink detection here: https://docs.pupil-labs.com/#blink-detection
hi ! We have a problem that when we start pupil capture the world camera is white
@user-feb6c2 please could you go to the world window and in the menu General > Restart with default settings
@user-feb6c2 what OS are you using?
@wrp We did this two times, but still comes up this. Windows.
are you here ?
Hi @user-feb6c2 yes, thanks for sending the image. Strangely it looks like the camera white balance or gain settings are not getting reset when you reset default settings.
Can you exit Pupil Capture, manually delete the pupil_capture_settings folder from C:\Users\YourUserName\, and relaunch Pupil Capture?
okay, I'll try
Thanks @user-feb6c2
@user-feb6c2 any updates?
@papr @mpk Thanks guys. Makes sense. Could it really be pre-intel?!? I don't think it's THAT old, but maybe I'm wrong.
This makes me feel old.
@user-8779ef not pre intel
but pre i Series
@wrp THANKS now it is working 😃
@user-feb6c2 great - thanks for the feedback. Strange that restarting with default settings did not clear the user settings folder on Windows 10. We will look into this.
@user-8779ef Not Pre Intel but pre 'i' series.
@wrp I'd also like to ask you about the manual marker calibration. Sometimes it works, but sometimes the fixation is not where we look. I don't understand: do we have to record during the calibration in order to calibrate it offline later?
@user-feb6c2 I will be AFK soon, but want to give a quick reply before I have to go. There are two issues here from what I understand:
1. Quality of calibration - Are you using the most recent v4.0 marker from the docs? What is the accuracy reported? What is your calibration technique? Have you tried single marker calibration with manual marker?
2. Offline calibration - If you want to have the option to re-calibrate post-hoc in Pupil Player, then you should start recording prior to calibrating. This way you will record the calibration process and can then detect the calibration markers post-hoc in Pupil Player. (Note: please ensure that you also record eye videos - this option is enabled by default in the recorder plugin, but just making this note in case you turned it off).
Hey, I already asked this a while back and wanted to bump it again. A reply would be much appreciated!
Hey everyone,
while experimenting with the high-speed camera and the exposure time value, the luminosity jumps from bright to a darker state while sliding the exposure time bar (126-128 and 159-160, for example). Any idea why this is happening? Does changing the exposure time trigger an automatic reaction from the sensor?
best regards
@wrp , Thanks for your response to my enquiry about eye tracking with preschool aged children. I cannot provide any data from our participants because we have privacy agreements but I am happy to answer any questions about the eye cam, etc. When you ask about accuracy, are you asking about the percentage of data dropped during the calibration? For our adult participants it tends to be somewhere between 8 and 13%. For our kids it has been 38% and up.
@wrp Another thing I should mention, if it helps, is that it is really difficult for us to adjust the eye cam to get a really good view of the child's eye. I'm not sure why this is--maybe just because the child's head is smaller? But the eye always looks angled a little bit to the side in the eye cam video instead of being a direct view.
@user-380f66 I'm asking about the gaze accuracy metric not percentage of data used during calibration.
@user-103621 I see your question but don't have an immediate answer for you
Hi all, our lab is still trying to test our experimental setup using Pupil Mobile with the Moto Play Z2, and having some pretty substantial issues with low framerates, especially for the worldcam. I've tried switching off the H.264 transcoding, but don't really want to go below 1080 x 720 resolution or 30fps, if possible. So far I have also tried local recording (as suggested), which can get as good as 17fps on a cool day, or as low as 1 fps on a hot day--which happened yesterday and caused us to have to omit eye-tracker data collection altogether. Framerates and heating issues are not so bad with the Pixel tablet, but the eye cameras behave erratically when trying to do local recording with it.
So, our question remains.. is there anything more we can do to try to improve this recording setup (specifically frame rate, heat, timestamps and overall stability), or should we look at running Pupil capture on a mini-PC to assure long term reliability? Does anyone else have similar issues with heat? We'll need to maintain this setup for at least 100 participants and have a huge robotics research consortium waiting on this data, so my PI is getting pretty adamant about resolving this. I saw that there will be a new release soon, fixing some of the other errors we have been seeing, so will continue to try to improve what we have for the next week or so. After that, I should go ahead with the mobile PC purchase.
So, the next question would be: what are the minimum requirements for running pupil capture? Would an i3 processor be enough, or better to go with an i5? Considering something similar to a Lenovo Thinkcentre TINY - M73 or M710q.
@user-380f66 have you tried the eye camera arm extenders? The eye being off center in the camera image is OK, as long as all eye movements are still within the frame of the eye camera window.
@user-2ff80a thanks for the report! I understand that you are doing local recording only (no streaming?) . We have not seen this kind of behaivour here.
@ease-csl-research#3732 I would recommend the i5
So, back at getting player to run on this old Mac. When running from source, I get the error: "Error calling git: "Command '['git', 'describe', '--tags']' returned non-zero exit status 128."
Hi all, when I export the gaze positions excel file out of pupil player, what units are the gaze positions measured in? it would seem to me that the norm_pos_x and norm_pos_y should range from (-1,1) but this is not the case.
Eventually, this results in "raise ValueError('Version Error')"
Perhaps I should try cloning the git repo in a different way....
@user-85976f did you download the git repo as a zip file from release?
Yeah, did that. About to log into git and clone.
@user-85976f You should actually clone it with git clone. Additionally, as noted earlier, some libs might not build on non-i-series processors
I think they all built correctly ... we'll see.
Oh, they might fail at runtime. Ok, I'll report back.
@user-464538 norm_pos_x and norm_pos_y can be greater and/or less than the range of 0-1, as gaze mapping could potentially map outside the frame of the world camera
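To relate those normalized values to world-frame pixels, note that Pupil's normalized origin is the bottom-left with y pointing up, so a y-flip is needed for top-left image coordinates. A small conversion sketch:

```python
def norm_to_pixels(norm_x, norm_y, frame_width, frame_height):
    """Convert a Pupil normalized position (origin bottom-left) to image
    pixel coordinates (origin top-left). Values outside 0-1 simply land
    outside the frame, matching the behavior described above."""
    x_px = norm_x * frame_width
    y_px = (1.0 - norm_y) * frame_height
    return x_px, y_px

inside = norm_to_pixels(0.5, 0.5, 1280, 720)     # center of the frame
outside = norm_to_pixels(1.25, 1.0, 1280, 720)   # x beyond the right edge
```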
@user-2ff80a One more note. I saw that the current version of Pupil Mobile does not stop streaming once you have accessed the streams from Pupil Capture. For now, just manually stop streaming the sensors in Capture. We will work on a fix. If you are only recording locally this is not an issue!
@user-2ff80a I just made a 30 min 1080p h264, 200fps binocular local recording using Pupil Mobile without streaming in about 32 deg ambient temperature. I saw a smooth 25fps avg. (30fps when using 720p). Please make sure not to stream at the same time. If your problem persists, let's do a video debug session!
@user-2ff80a I did see that Pupil Player skips frames when the CPU is not fast enough during playback. The fps graph in the top left shows the rendered frames, so this can be misleading. Maybe this is why it looks slow? I find playing it back on a faster machine helps. Also, video exports will not skip frames.
Hello all, just installed the mobile app and wanted some clarification on the intended functionality. After transferring the locally recorded files/project folder from the Z2 to my desktop the "Player" software displayed "no fixations available." There doesn't look to be any visualization data at all. No gaze lines or circles are present. The issue is unique to "Mobile" app as my primary setup using a tablet with "Capture" software to record sessions presents as expected. Is Pupil Mobile solely a companion app to "Capture" for the purpose of streaming and previewing video OR can it function as a standalone means of recording video and data for post-session analysis in "Player"? Perhaps there a setting I was supposed to enable but didn't? Couldn't find much in the documentation and understanding it is in alpha I turn to you fine folks for any thoughts. I am just getting started with the Pupil hardware and software so any help is greatly appreciated. Regards.
@user-b04ab9 you are exactly right, the mobile app only records and streams but does not do any calculations. See the docs on offline pupil detection and calibration
Hi all, I was attempting to teach one of my colleagues how to use the eye tracker when one of the eye cameras (eye 1) and the world camera stopped working. The world camera has since come back on, but eye 1 is still out, listed as "unknown" in the Local UVC sources. Is this a hardware issue?
ooh, ok eye 1 did just come back online but is now off again
well because the cameras do sometimes work i'm thinking it might be a loose wire or something similar, any help would be greatly appreciated
@user-8a8051 what OS are you using? Are cables firmly connected?
aah sorry macbook air, OS Sierra 10.12.6,
it was working fine for about 30 minutes before it stopped quite suddenly and as far as i can tell all the cables still appear to be connected
@user-8a8051 send us an email [email removed] and we can schedule a time for remote debug and/or return for repair if needed. Please also include your order_id number in the email if possible
thanks, ill send through shortly
Ok, thanks @user-8a8051
@here 📣 Announcement 📣 - We have just posted the v1.8 release of Pupil Software. It is available here: https://github.com/pupil-labs/pupil/releases/tag/v1.8
We recommend that you update to v1.8. Please let us know if you have any feedback 😄
I already did some tests with v1.8 and it looks great so far. Good job!
@user-c351d6 great to hear!
Hello everyone, I wanted to ask if there is a way to get all the data from the video captured by the world camera?
@mpk we originally tried streaming, but the timestamp regularity and video quality were not reliable. Streaming over WiFi also increases the heat of the Moto Play, by quite a lot. When I reported the issues here (a week or so ago) the suggestion was to try local recording.
@user-f81efb What do you mean by "all the data". Videos are saved during recordings.
@papr Thanks! hopefully it doesn't come to that... I see some of the later messages from @mpk are addressing the issues we've had, so I'll test out the new release today.
@papr i meant can we get the depth paramaters as a csv file maybe
*parameters
What depth parameters in particular? Do you mean depth value of 3d gaze?
Or are you using the Realsense camera?
I know we can get the depth parameters with the gaze, but is it possible to get values for all coordinates irrespective of gaze
@user-f81efb which scene camera configuration are you using? The Realsense 3d or the high speed camera?
the realsense 3D
@mpk thanks for the additional testing. I'd tried to stop streaming while doing local recording, but it repeatedly started back up after a few moments. I'm going to get the latest release and see if we get improvements, based on your instructions. Thanks again!
@user-f81efb Ok, the Realsense 3d camera returns 16-bit depth/gray values. There is no video format that can save these values. Therefore we convert the 16-bit depth/gray values to 8-bit RGB values and save these images instead. There is a flag in the Realsense Source menu to enable/disable this option.
Okay! I shall check it. Thank you
@user-f81efb you can preview the conversion as well. The option is part of the same menu
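The 16-bit to 8-bit conversion described above can be illustrated with a simple min-max rescale replicated into three channels; Pupil's actual mapping may differ (it could use a color map, for instance):

```python
import numpy as np

def depth16_to_rgb8(depth):
    """Rescale a 16-bit depth image to 8 bits and stack it into an RGB image.
    Illustrative only; not necessarily Pupil's exact conversion."""
    d = depth.astype(np.float64)
    span = d.max() - d.min()
    if span == 0:
        gray = np.zeros(d.shape, dtype=np.uint8)
    else:
        gray = (255 * (d - d.min()) / span).astype(np.uint8)
    return np.stack([gray, gray, gray], axis=-1)

# A tiny synthetic depth frame: nearest, mid-range, farthest.
rgb = depth16_to_rgb8(np.array([[0, 32768, 65535]], dtype=np.uint16))
```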
@mpk I will also check out the Pupil Player issues you mentioned. The FPS reported in the upper left didn't always make sense based on what we were seeing, so I usually went by that shown in the file source 'Frame rate' field. The computer we run Capture on has an i7-6700, so it should not be an issue, I guess? Will also try video export, as you suggested, and see how that affects the frame skipping.
@papr how can i access the values for 16bit depth/gray values?
i am using pupil capture on mac OS
@user-f81efb You will need to modify the backend to do so.
If I remember correctly. Let me check that after lunch.
@papr Got it. Thank you for pointing me in the right direction.
@user-f81efb I was wrong! Let me write up a small example plugin that show cases the depth frame access.
@user-f81efb This is the example: https://gist.github.com/papr/0f13943e2aebd768ab6b1508d466caae
To use it: 1. Place it in the plugins folder within your capture settings folder. 2. Enable Depth Frame Accessor within the Plugin Manager menu. This example requires Python >= 3.6 since I used f-strings in line 12.
Hey guys! I have a big recording (30 mins); pupil_data is about 1.1GB and Player hangs on Windows and crashes on Linux
Here is the log
➜ ~ pupil_player
MainProcess - [INFO] os_utils: Disabling idle sleep not supported on this OS version.
player - [ERROR] player_methods: No valid dir supplied (/opt/pupil_player/pupil_player)
player - [INFO] launchables.player: Session setting are from a different version of this app. I will not use those.
player - [INFO] launchables.player: Starting new session with '/media/timopheym/Seagate Backup Plus Drive/Record data/Recovered data 07-26 17_28_12/result/disk/GoogleDisk/shebekeno/Record data/clear/23_july/eye_tracking/004'
player - [INFO] player_methods: Updating meta info
player - [INFO] player_methods: Checking for world-less recording
player - [INFO] player_methods: Updating recording from v1.4 to v1.8
player - [ERROR] launchables.player: Process player_drop crashed with trace:
Traceback (most recent call last):
File "launchables/player.py", line 646, in player_drop
File "shared_modules/player_methods.py", line 234, in update_recording_to_recent
File "shared_modules/player_methods.py", line 599, in update_recording_v14_v18
File "shared_modules/file_methods.py", line 146, in append
KeyError: 'topic'
What can I do with that? I can send a Google Drive folder with one of the recordings.
Thanks!
I will have a look at it, thanks
Please! We have 40 recordings and need to open them in OGAMA to make a report, and it looks impossible now =((
Before installing the latest version of pupil software, should I uninstall the old versions?
@user-d16d74 This is not necessary
Thanks!
@user-d79ff5 I will write you a PM in order to not spam everyone with the details. I will share significant results here.
👍
Ok, turns out that his recordings are older and that their surface key was named differently than was the case in v1.7. We will update the bundle soon.
@papr When would it be possible to run it from source? We don't have much time. I managed to export one record with 20 GB of swap, but with a lot of pain on the previous version...
Hello, I'm new to this. Can you please guide me on how to start using this? I would be grateful.
@user-d79ff5 the fix was pushed to master this afternoon. You should be able to run from source already
@user-cc65ff see the link above
Hi @papr, is there a demo video which I can import into Pupil Player to have a play around and explore with? I don't possess any glasses yet but am awaiting approval from our finance department... =)
@papr Hello! The new version hasn't helped much. We tried to open a video in versions 1.7.1 and 1.8 and it crashes during the upload process (error in the transformation of the world file). In 1.4 everything is OK, except that the exported mp4 video cannot be played back.
Hi @user-2c0e1f we will look into this on Monday and will likely update the v1.8
release with new bundles with hotfixes to recent issues posted.
@wrp Ok, thanks
@user-24e31b Here is a sample binocular dataset that you can open in Pupil Player - https://drive.google.com/file/d/0Byap58sXjMVfUFZMdUZzdTdjaFE/view?usp=sharing
@user-2c0e1f thanks for the report.
Hi, I just tried to run Pupil on a Windows 8.1 machine - without success. Are there any additional requirements compared to Windows 10? (I haven't had any problems on a Windows 10 machine, but now I have to use this system.) Pupil Capture prints the following to its console:
C:\Users\show\Downloads\pupil_v1.8-16-g0ab50f4_windows_x64\pupil_capture_windows_x64_v1.8-16-g0ab50f4>pupil_capture.exe
MainProcess - [INFO] os_utils: Disabling idle sleep not supported on this OS version.
world - [INFO] launchables.world: Application Version: 1.8.16
world - [INFO] launchables.world: System Info: User: show, Platform: Windows, Machine: MASTER, Release: 8.1, Version: 6.3.9600
Running PupilDrvInst.exe --vid 1443 --pid 37424
OPT: VID number 1443
OPT: PID number 37424
...
world - [ERROR] video_capture.uvc_backend: Init failed. Capture is started in ghost mode. No images will be supplied.
world - [INFO] camera_models: No user calibration found for camera Ghost capture at resolution [1280, 720]
world - [INFO] camera_models: No pre-recorded calibration available
world - [WARNING] camera_models: Loading dummy calibration
world - [WARNING] launchables.world: Process started.
Running PupilDrvInst.exe --vid 1443 --pid 37424
OPT: VID number 1443
OPT: PID number 37424
...
eye0 - [ERROR] video_capture.uvc_backend: Init failed. Capture is started in ghost mode. No images will be supplied.
eye0 - [INFO] camera_models: No user calibration found for camera Ghost capture at resolution [320, 240]
eye0 - [INFO] camera_models: No pre-recorded calibration available
eye0 - [WARNING] camera_models: Loading dummy calibration
Running PupilDrvInst.exe --vid 1443 --pid 37424
OPT: VID number 1443
OPT: PID number 37424
...
A few tries ago there also was an error about missing powershell version 5. It's not showing up anymore but maybe this helps?!
the powershell error showed up again:
powershell.exe -version 5 -Command "Remove-Item C:\Users\show\Downloads\pupil_v1.7-42-7ce62c8_windows_x64\pupil_capture_windows_x64_v1.7-42-7ce62c8\win_drv -recurse -Force;Start-Process PupilDrvInst.exe -Wait -WorkingDirectory \"C:\Users\show\Downloads\pupil_v1.7-42-7ce62c8_windows_x64\pupil_capture_windows_x64_v1.7-42-7ce62c8\" -ArgumentList '--vid 3141 --pid 25771 --desc \"Pupil Cam2 ID0\" --vendor \"Pupil Labs\" --inst' -Verb runas;"
world - [WARNING] video_capture.uvc_backend: Updating drivers, please wait...
Windows PowerShell version 5 cannot be started because it is not installed.
world - [WARNING] video_capture.uvc_backend: Done updating drivers!
world - [ERROR] video_capture.uvc_backend: Init failed. Capture is started in ghost mode. No images will be supplied.
world - [INFO] camera_models: No user calibration found for camera Ghost capture at resolution [1280, 720]
world - [INFO] camera_models: No pre-recorded calibration available
world - [WARNING] camera_models: Loading dummy calibration
world - [WARNING] launchables.world: Process started.
Running PupilDrvInst.exe --vid 1443 --pid 37424
OPT: VID number 1443
OPT: PID number 37424
...
Hi, thank you so much. Do you know the Starburst algorithm? How would you find the starting point if I use this algorithm?
You seem to require powershell version 5. The Windows 8.1 instance I have to use here had only version 4 installed. Resolution was to install Windows Management Framework 5.1 (https://www.microsoft.com/en-us/download/details.aspx?id=54616)
Thanks @user-82e7ab for following up on this. Please note that we only support Windows 10 - but nice to see your workaround.
you do not support Windows 10?
@user-82e7ab We support ONLY Windows 10
😃
@user-82e7ab He meant to say we don't support Windows 8
I thought so 😉
The topic of zmq-transmitted gaze data (the first part of a multi-part message received after subscribing to `gaze`) changed from `gaze` to `gaze.3d.{0,1,01}`. Now it matches the subject contained in the second part of the message, so I see that this makes more sense, but it took me some time to find this change. Maybe you could add a short note on this to the 1.8 release notes? Unless there's some other kind of changelog I missed.
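For anyone hit by the same change: zmq SUB sockets match topics by string prefix, so a subscription to `gaze` still receives the new `gaze.3d.{0,1,01}` topics; only exact-match comparisons break. A minimal sketch of that prefix matching (pure Python, no zmq required; `matches_subscription` is an illustrative helper, not a Pupil API):

```python
def matches_subscription(topic: str, subscription: str) -> bool:
    """zmq SUB sockets deliver a message if its topic starts with
    any of the subscribed prefixes."""
    return topic.startswith(subscription)

# Old subscriptions to "gaze" keep working with the v1.8 topics:
for topic in ("gaze.3d.0.", "gaze.3d.1.", "gaze.3d.01."):
    assert matches_subscription(topic, "gaze")

# ...but code that compared the topic string exactly to "gaze" breaks:
assert "gaze.3d.01." != "gaze"
```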
@user-82e7ab This is an implicit outcome of what has been described under `Developers notes > API changes` in the release notes
ah I see .. the changes to zmq_tools.Msg_Streamer.send()
anyway .. I like the change as it gives more consistency in the data.
Also the tracking is much more stable (from first short test runs).
Nice work - thx!
btw .. is it possible to stream gaze data from pupil player?
No, streaming from player is not possible
ok, maybe you could think about this for a future release? It may be a personal preference, but I like the workflow of capturing some exemplary movements and then being able to replay them in a loop while developing. Yes, it's only a convenience feature, but a nice one IMHO: no need to put on and take off an HMD every time I test a small code change, and (especially on days like these) the cameras don't need to run and won't heat up as much.
If it is just for testing, you can use the `Video File Source` backend in the eye windows. Select the backend and drag & drop the eye videos onto the eye windows. Unfortunately, there is no way to sync the video playback, but the frames are published using the recorded timestamps.
ah that would be enough for my case .. thx ; )
@papr Hello! What is the average frame duration when recording in 200 Hz mode?
200 Hz results in an average of about 0.005 (1/200) seconds per frame
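You can sanity-check that against a recording by averaging the successive differences of the eye camera timestamps. A small sketch (the timestamp list here is synthetic, standing in for a recording's timestamp array):

```python
def mean_frame_duration(timestamps):
    """Average time between consecutive frames, in seconds."""
    diffs = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return sum(diffs) / len(diffs)

# Ideal 200 Hz timestamps spanning one second:
ts = [i / 200.0 for i in range(201)]
print(mean_frame_duration(ts))  # ≈ 0.005 s per frame
```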
I'm looking at the blink detection plugin offline detector, and am trying to understand all that is happening within it. Can anyone explain what the filter/filter response is, and how that is used to determine the blink confidence? I'm combing through all of the code, but I'm having a hard time understanding this part of it.
@user-988d86 the whole idea is based on binocular confidence drops during blinks. We find them by applying a step filter to the confidence signal. The result shows us when the confidence dropped and spiked. The time between these events is the blink.
The online blink detector applies the filter on a moving window, while the offline detector applies it to the whole recording.
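To illustrate the idea (a simplified sketch, not the plugin's actual code): correlating the confidence signal with a step kernel of +1s followed by -1s produces a strong positive response where confidence drops (blink onset) and a strong negative response where it recovers (blink offset):

```python
def step_filter_response(confidence, half_width=5):
    """Correlate the confidence signal with a step kernel:
    +1 for the first half, -1 for the second half."""
    kernel = [1.0] * half_width + [-1.0] * half_width
    k = len(kernel)
    return [
        sum(c * w for c, w in zip(confidence[i:i + k], kernel))
        for i in range(len(confidence) - k + 1)
    ]

# High confidence, a "blink" of low confidence, then recovery:
signal = [1.0] * 10 + [0.0] * 10 + [1.0] * 10
response = step_filter_response(signal)
onset = response.index(max(response))   # positive peak at the drop
offset = response.index(min(response))  # negative peak at the rise
print(onset, offset)  # → 5 15
```

The time between the positive and negative peaks is the blink duration; the peak heights relate to how confident the detection is.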
@papr I heard that there was a problem with model variation when it came to collecting gaze data. Does that problem still exist? Will the model of the eye change from time to time when querying gaze data?
If so, then what is the best way to query the position of an eye? I am using norm_pos, but it is not precise enough
Hello everyone! My name is Yulia! Friends, I am looking for people who also work with this data. I need a little help.
Hi everybody, I have a question again 😦 In eye tracking, the pupil center is jumping a lot and isn't a fixed point. What should I do?
@user-cd9cff Yes, the 3d model can change if new observations no longer fit the old model. This is not a problem but a feature to improve accuracy over time. The 3d model only has 3 parameters: the x/y/z coordinates of the eye sphere center. This is what we call the position of the eye. `norm_pos` is not the eye position but the normalized 2d gaze location within the scene camera video frame. Gaze estimation has an expected error of 0.5-1.0 degrees. As a reference: 1 degree corresponds to the width of your thumb if you stretch out your arm and hold your thumb up.
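If you need that `norm_pos` in pixel coordinates, e.g. to draw the gaze onto a scene frame, you have to denormalize it. A sketch of the conversion (to my understanding, Pupil's normalized coordinates have their origin at the bottom left while image pixels have it at the top left, hence the y flip; `denormalize` here is an illustrative helper):

```python
def denormalize(norm_pos, frame_size, flip_y=True):
    """Map a normalized (0..1, 0..1) position to pixel coordinates."""
    x, y = norm_pos
    width, height = frame_size
    if flip_y:
        y = 1.0 - y  # normalized origin is bottom-left, pixels are top-left
    return x * width, y * height

print(denormalize((0.5, 0.5), (1280, 720)))  # → (640.0, 360.0)
```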
@user-128c78 Could you provide an example recording and send it to [email removed]? Please record the calibration procedure as well.
@papr Do you plan release new version with the fix?
@user-d79ff5 Which fix?
Ah, I remember. As I said, the fix is released on GitHub if you want to run from source. I will have to check with my colleagues about the bundled app release though. Thanks for reminding me.
@mpk will try and release a new bundle that includes the fix today
@papr I tried to use recorded videos as a source instead of the local USB cameras and it worked, thx again. Still, maybe you could add an option to loop a video file source in a future release?
@user-82e7ab there is a loop option in the video file backend UI. Check the sidebar 😃
😱
sorry 😔
@user-d79ff5 @papr I just uploaded bundles for Linux and Mac with the fix.
Windows will follow in the next 24hrs.
@papr So, if I want to track the eye position so that it does not stray from a dot in the center of the screen, it would be better to use gaze_point , gaze_normals, or something else?
@user-cd9cff norm_pos is the derived 2d position from gaze_point
If you simply want to visualize the gaze within the recorded scene video, norm_pos is the way to go.
`v1.8-22` bundles have been uploaded and are available via the v1.8 release page - https://github.com/pupil-labs/pupil/releases/latest