Hi @user-c563fc, Pupil currently only runs with Python 3.6 on Windows.
@user-c563fc please make sure to follow the setup instructions closely. They should tell you to use Python 3.6. https://github.com/pupil-labs/pupil/blob/master/docs/dependencies-windows.md
@user-b292f7 The fixation detector is based on a minimum-duration-maximum-dispersion algorithm. In your case, I would guess that the maximum dispersion is set to 1.6 degrees
and where can I change it to 3 for example?
@user-b292f7 In the "Fixation Detector" menu on the right.
So I am not having the best time with the surface tracker plug-in. It worked for the first export, but the Player won't recognize the legacy square markers for at least 2 other files. Would you recommend just copy-pasting the surface definition file into the other data folders?
@user-908b50 Have you tried reducing the minimum marker perimeter?
@user-908b50 I would recommend copying the surface definitions if they are the same across multiple recordings. But if the markers are not being detected, the surface cannot be tracked.
@papr i will try that! I wanted to keep the min perimeter the same for all recordings. does it matter? the surface is the same. I am using the same width x height pixel dimensions for all data.
The perimeter can be set individually for each recording. It is a trade off between marker size and false positive marker detections.
So lower perimeter, higher chance of false positives? I am going to try copy pasting the surface definitions first.
Is there a way to double check the settings used during an export?
Correct. Unfortunately, the surface tracker does not export its configuration
yes, that is very unfortunate! I had been playing around with the settings between different exports so I am a little unsure about the min perimeter I used.
You should see the effect of any change to the parameter in real time though.
true, but that won't give me much information about the export I have right?
nvm, I am using the default, recommended perimeter of 60, figured it out.
Re-using the surface definition did not help and neither did changing the minimum perimeter. I get this error:
Background Video Processor - [INFO] camera_models: Loading previously recorded intrinsics...
/home/fiza/pupil/pupil_src/shared_modules/square_marker_detect.py:174: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray
contours = np.array(contours)
I think this was because I unchecked "inverted markers" in an attempt to re-start finding the surface markers. Please correct me if I am wrong! Anyway, the square legacy markers are not detectable.
@user-c563fc please make sure to follow the setup instructions closely. They should tell you to use Python 3.6. https://github.com/pupil-labs/pupil/blob/master/docs/dependencies-windows.md @user-c5fb8b I followed the setup instructions and now I am getting this error ImportError: DLL load failed: %1 is not a valid Win32 application.
and this link is also broken in instructions Download FFMPEG v4.0 Windows shared binaries from ffmpeg
@user-c5fb8b I followed the setup instructions and now I am getting this error ImportError: DLL load failed: %1 is not a valid Win32 application. @user-c563fc I solved the issue: it was because of the FFMPEG DLL files. That FFMPEG build is not maintained anymore, and I had to go to the Web Archive to download the old 2018 version of FFMPEG to find those DLLs.
Hi guys, just a general question. While doing the DIY build, we need to remove the IR filter from the lens. Why is that? Doesn't the software work without removing the IR filter?
Just a question on the different pupil player versions! Since I collected data using version 1.11 and now I'm using v2.4.3 to analyze data, do the performance gains in pupil labs software translate to post-hoc offline processing and analyses as well?
@user-908b50 are you able to share one of the non-detectable recordings with us? I can have a look if you want.
@user-908b50 Also, which performance improvements are you talking about in particular?
@user-d8853d The software works best on IR images because, under IR illumination, the pupil is much darker relative to its surroundings than it is in visible light. Removing the filter allows IR light to pass onto the sensor; usually, IR light is blocked so that only visible light is captured.
Hello
Apart from manually starting and stopping the system, is there a way to tell the system to start, pause or stop?
Like something that makes sound signals?
Hi @user-7daa32, what do you mean with "the system"? The whole application? Pupil Capture or Pupil Player? The calibration?
@user-7daa32 You can start and stop recordings and calibrations via the network api https://docs.pupil-labs.com/developer/core/network-api/#pupil-remote
Hello! I'm a new user of Core. We are just beginning to set up an eye-tracking experiment with it, and I am quite lost regarding a few things. After calibrating, whichever method I use, the tracking marker is always shifted to one side. For example, if the wearer says she is looking at my face, based on the captured video it's like she is in fact looking at someone sitting next to me. Probably there is an easy solution to this; I just can't find the source.
@user-690703 What accuracy is reported in the accuracy visualizer menu?
Hi @user-690703, welcome to the community! Did you have a look at the best practices section in our docs? It's a good starting point for new users: https://docs.pupil-labs.com/core/best-practices/
A warm welcome from my part as well 🙂
Thank you! 🙂 Yes, I've checked out the documentation, and I guess I'm still missing something.
@papr Should I look at the Angular accuracy?
@user-690703 yes please
It's 2.39 in this case.
@user-690703 That is fairly high. As a reference, one degree corresponds roughly to the width of your thumb at arm's length if you hold your arm straight in front of you.
@user-690703 I think it would be easiest for us to give feedback if you shared an example recording of you calibrating with [email removed] Just hit the R button before starting to calibrate.
If I use screen marker choreo, it's much better, close to 1, but we are examining faces during interaction, so I figured another choreography would be much better in this case.
Thank you, I'll do that!
@user-690703 Feel free to record the calibration with which you have trouble getting good accuracy.
@user-7daa32 You can start and stop recordings and calibrations via the network api https://docs.pupil-labs.com/developer/core/network-api/#pupil-remote @papr Thanks. This means I will have to write code to do this.
What the system should start or stop are the calibration and the recording; that is just a goal given by our group. Thanks for the link. Assuming the information from that link is the solution, it means I will have to write code, which I can't do. I will see if my PI understands it. I will also ask what he wants to start or stop.
I thought we wouldn't have a problem by just hitting the C and R buttons
@user-908b50 are you able to share one of the non-detectable recordings with us? I can have a look if you want. @papr There are quite a few recordings! I will email one or two to you. I shouldn't think ethics will have a problem since they are de-identified... fingers crossed! Could you kindly re-share your email? Except for the first one, every recording is now coming up as undetectable for whatever odd reason. In terms of performance gains, I meant fixation detection and pupil positions. Since I collected the data using an older version, is it worth struggling through the surface tracking to export data using the newer version?
@user-908b50 data@pupil-labs.com
@user-908b50 let me check out the recordings first. I will let you know.
Hi all, I'm having an issue getting my Pupil Capture application to recognise a video source. I have the Pupil Labs HTC Vive add-on plugged in via a USB 3.0 connection, I'm using Windows 7, and I can see the Pupil Cam1 ID0 and ID1 in my device manager under imaging devices. Should I expect any other devices to pop up in my device manager? I attempted to uninstall the drivers and open Pupil Capture as an administrator to reinstall the drivers, but had no luck. (Let me know if I should direct this to the vr-ar chat)
@user-020426 Hey, unfortunately, we do not support other Windows versions than Windows 10. Please upgrade or use one of the other supported operating systems: Ubuntu 18.04 or newer, or macOS High Sierra 10.13 or newer
Ahhh, that will explain my issue, thank you for letting me know @papr and i'll look at upgrading now.
Hello, I've just started working with Pupil Labs Core for my engineering diploma. When I started Pupil Core as administrator (after Windows 10 installed the camera drivers), one of the messages in the console reads: "video_capture.uvc_backend: Hardware timestamps not supported for Logitech Webcam C930e. Using software timestamps." And a second one: "pyre.pyre_node: Group default-time_sync-v1 not found". In the Pupil Core application I only get an image from the main camera; the cameras for both eyes don't work. What should I do?
After reconnecting via USB again, I got the message "pyre.pyre_node: Peer None isn't ready" for both eyes.
I've tried uninstalling and installing the drivers for the device according to the instructions from your website https://docs.pupil-labs.com/core/software/pupil-capture/#troubleshooting
With my second Pupil Labs Core (not Pro) I have problems too. But in this case, neither the main camera nor the eye cameras work. For all devices I get the message "video_capture.uvc_backend: Could not connect to device! No images will be supplied."
Hello there,
I have a question. Is there any way to read the data of the eye tracking glasses "pupil core" directly from a computer via USB? Or do you need a remote access via API?
@user-467cb9 This looks like the camera drivers for the eye cameras were not installed correctly. Which version of Pupil Capture and which hardware configuration do you use?
@user-a6e660 The glasses only provide the camera images via USB. The eye tracking results are generated by software on the computer. To access the results, you can either build a plugin for the software or access it via the Network API (recommended).
Thanks for the answer. I have already informed myself about access to the real-time data (IPC backbone, PUB-SUB pattern of ZeroMQ for one-to-many communication). However, I could not successfully implement this in the project. Is there any other way to read the data as easily as possible, in order to evaluate it afterwards?
@user-a6e660 If you do not need to access the data in real-time, the recommended work flow is to export the recording to CSV files with Pupil Player: https://docs.pupil-labs.com/core/software/pupil-player/
Hello
I think the other times I wasn't clear on what I wanted or really wanted, because I didn't understand it from my PI.
Here is it
We have a starting point. While at the start point, the system will tell the participant to look at A (maybe in the form of a sound; the first sound should mean this), and then we will have a sound signaling fixation on the area of interest (just like in the screen marker calibration process, where the stop signal is the marker moving to another spot). Next is a dead time while the participant returns to the starting point. The process then starts again for another area of interest to be searched for. All of this will be in one video for that particular participant. I don't know how feasible this is.
Yes @user-a6e660, I currently have the same problem... I would like to read out the data in real time, but I don't have a solution for this problem. I cannot successfully install the network API on my computer. Does anyone have any instructions or tips on how I can best do this? The instructions on Pupil Labs are very vague... Thanks.
@user-b7ea86 Let me try to clarify: The network API works based on a server-client principle where Pupil Capture is the server and your script/program/experiment is the client. The communication requires 2 library dependencies: zeromq and msgpack. Both are available for a multitude of programming languages.
If you were to use Python as your programming language of choice, you would have to do the following things on the client computer:
pip install pyzmq
pip install msgpack==0.5.6
Therefore, you cannot install the Network API itself on your computer, only its dependencies. I hope this has clarified your questions.
Please be aware that you can run the client script on the same computer as Pupil Capture. If you run them on separate computers you will have to address the server ip address in the client script.
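To make the server-client idea concrete: assuming Pupil Capture is running with Pupil Remote on its default port 50020, a minimal client built on the two dependencies above could look like the sketch below. The function name is mine; 'R'/'r' (recording), 'C'/'c' (calibration) and 't' (current Pupil time) are the documented Pupil Remote commands.

```python
import zmq

def send_pupil_remote_command(command, host="127.0.0.1", port=50020, timeout_ms=2000):
    """Send one Pupil Remote command over a REQ socket and return the
    reply string, or None if Pupil Capture is not reachable in time."""
    ctx = zmq.Context.instance()
    sock = ctx.socket(zmq.REQ)
    sock.setsockopt(zmq.RCVTIMEO, timeout_ms)  # don't block forever on recv
    sock.setsockopt(zmq.LINGER, 0)
    sock.connect(f"tcp://{host}:{port}")
    sock.send_string(command)
    try:
        return sock.recv_string()
    except zmq.Again:  # no reply within timeout, e.g. Capture not running
        return None
    finally:
        sock.close()
```

For example, `send_pupil_remote_command("R")` would start a recording if Capture is listening on the default port; pass a different `host` if Capture runs on another machine, as noted above.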
@user-7daa32 Pupil Capture can not be used to create visual or auditory stimuli. Usually, you would use an experiment-building software like PsychoPy. After building the experiment, you would integrate the network API into your experiment code such that the experiment can control Pupil Capture remotely, including sending timestamps to Capture when it presents stimuli to the subject (often called triggers or annotations, see https://docs.pupil-labs.com/developer/core/network-api/#remote-annotations)
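As a rough illustration of what such a trigger/annotation payload contains (a sketch based on the remote-annotations docs linked above; verify the exact keys there, and note the dict would still need to be msgpack-serialized and sent over the network API to actually reach Capture):

```python
def make_annotation(label, timestamp, duration=0.0, **extra):
    """Build a Pupil-style annotation payload. The topic must start
    with 'annotation' for Capture's annotation plugin to pick it up;
    timestamp is in Pupil time, duration in seconds."""
    payload = {
        "topic": "annotation",
        "label": label,
        "timestamp": timestamp,
        "duration": duration,
    }
    payload.update(extra)  # any custom fields end up in the export
    return payload
```

An experiment script would call this at stimulus onset with the current Pupil time, so the event shows up aligned with the gaze data in the recording.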
@user-b7ea86 and @user-a6e660 - here's some of my code that may help you along. There's a python class for pupil core and another for grabbing data in real time. It may not be exactly what you need, but I hope you find it useful. You could use it to grab 10 s of pupil data as follows:
p = PupilCore()
pg = PupilGrabber(p, 'pupil.1.3d', sec=10)
pg.start()
sleep(10)
data = pg.get('diameter_3d')
It would be great to get some feedback on this if you do find it useful, as this is my own development against the remote helper and Network API.
@user-430fc1 Do you need the R/r commands for this to work? If not, I would remove them to make this an even more minimal example.
@user-430fc1 Also, in case you have a repository for this code, feel free to link to it in https://github.com/pupil-labs/pupil-community
@papr I suppose not - I always assumed they were necessary
R/r start and stop recordings. If you just want real-time data without a recording, you do not need to start a recording.
@papr this is part of a larger repo that will be made available before long. I'll be sure to link it when the time comes.
Hi, can I change the maximum dispersion to more than 4.9? I tried to do so and it doesn't work.
Hi everyone, I would like to ask: what are the horizontal and vertical fields of view of the scene camera?
thank you!
@user-b292f7 No, this is not possible if you want to use the fixation detector via the UI. More than 4 degrees are usually considered a very large dispersion; not necessarily representing fixations.
@user-1768fa
# wide-angle lens on high-speed camera
FOVinDeg(resolution=(1920, 1080), horizontal=139, vertical=83)
FOVinDeg(resolution=(1280, 720), horizontal=99, vertical=53)
FOVinDeg(resolution=(640, 480), horizontal=100, vertical=74)
Is there a limit on the duration of fixations? It seems that I need more than 4 degrees of dispersion, and it looks the same with 4.9.
@user-b292f7 By increasing the dispersion limit, you are grouping smaller fixations into bigger ones. This is why you see less fixations with a higher dispersion limit.
I suggest giving the default values a try. 🙂
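This is not Pupil's actual implementation, but the minimum-duration-maximum-dispersion idea described above can be sketched in a few lines; the function name, data layout and thresholds here are made up for illustration:

```python
def detect_fixations(gaze, max_dispersion_deg=1.5, min_duration_s=0.1):
    """Toy dispersion-threshold (I-DT style) fixation grouping.

    `gaze` is a list of (timestamp_s, x_deg, y_deg) samples. A growing
    window counts as one fixation while its dispersion
    (max(x)-min(x) + max(y)-min(y)) stays within the limit; windows
    shorter than min_duration_s are discarded. Returns (start, end)
    timestamp pairs."""
    fixations = []
    start = 0
    for end in range(1, len(gaze) + 1):
        window = gaze[start:end]
        xs = [p[1] for p in window]
        ys = [p[2] for p in window]
        dispersion = (max(xs) - min(xs)) + (max(ys) - min(ys))
        if dispersion > max_dispersion_deg:
            # commit the window up to (excluding) the sample that broke it
            if window[-2][0] - window[0][0] >= min_duration_s:
                fixations.append((window[0][0], window[-2][0]))
            start = end - 1  # restart the window at the breaking sample
    if gaze and gaze[-1][0] - gaze[start][0] >= min_duration_s:
        fixations.append((gaze[start][0], gaze[-1][0]))
    return fixations
```

Running this with a larger `max_dispersion_deg` merges neighboring windows into fewer, longer fixations, which is exactly the effect discussed above.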
@papr Hi... Do you have a construction diagram of the scene camera? I think I accidentally broke it...
@user-1768fa If you are lucky @user-755e9e might be able to help you with that. But I cannot make any promises.
thank you sir @papr and @user-755e9e, looking forward to your reply.
Thank you. When I was using the default values it looked the same as with 1.5 at the top; maybe I need to change something else...
@user-b292f7 what specifically are you trying to achieve? Or what was your expected result when looking at the fixations?
hi sir @papr if it possible, may I buy a brand new scene camera?
@user-1768fa please contact info@pupil-labs.com in this regard
@user-c5fb8b I'm trying to compare data before and after learning by looking at the parameters (dispersion, duration, diameter...). It seems that I have longer fixation durations than the default, so I played with that as well...
@user-b292f7 OK, as @papr mentioned above, the higher you set the maximum dispersion, the longer the detected fixations will become, which might not be what you actually want. If I recall correctly, fixations are usually very short, on the order of a few hundred milliseconds. The higher you set the maximum dispersion, the more likely you are to detect something as a "fixation" which actually isn't one. That being said, I'm no eye-movement researcher and I would recommend checking with the relevant literature again.
Thank you @user-c5fb8b and @papr. When I use the defaults I get something that seems not good, so I tried changing the duration and dispersion limits, but maybe that's not the right direction...
@user-b292f7 what exactly do you mean by "seem not good"? How do you judge whether your settings result in good or bad fixation detections?
It's because my partners and advisor said that I need to change something until I don't see artificial dispersion at the top of the graph (the same with duration).
Excuse me, may I ask for the specifications of this camera? @papr
Maximum resolution, FOV and fps...
And I could not make contact with @pupil-labs.com ..
Hi, I would like to know if prescription glasses affect the pupil diameter calculated by the software. We are going to design an experimental protocol.
@user-1768fa You can find resolution and frame rates on our website. The latest FOV numbers are those that I posted above. What do you mean by you could not contact [email removed]
hi @papr, there are 2 kinds of cameras; I cannot identify which one is the 100° FOV one and which one is the 60° FOV one.
@user-1768fa Those FOV numbers are out of date and will be updated in the near future. Please see the values I posted above. They refer to the wide-angle lens, which is used in the pictures that you have posted so far.
and is the correct email address [email removed]
@user-1768fa yes
thank you!! @papr
@user-6b3ffb we recommend having the eye camera below or inside the prescription lenses.
@papr Recently, I have been reading and thinking about the gaze calibration code, but I am confused about https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/gaze_mapping/gazer_3d/gazer_headset.py#L203-L213 . For eye_camera_to_world_matrix T = [R t], I don't know why we need to calculate t = t' + R*(-sphere_center_pos) in get_eye_cam_pose_in_world. The t' is eye_hardcoded_translation, and it is also the eye camera center position in world. The t may translate the sphere_center to the eye camera position in https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/gaze_mapping/gazer_3d/gazer_headset.py#L253-L254 , because sphere_center*R + t = sphere_center*R + t' + R*(-sphere_center_pos) = t'. Thus, the s0_center is consistent with the eye camera center position in world, t'. I don't know where my misunderstanding is; please point it out. Thank you very much.
@user-1ccccf Btw, we found a reason why the confidence is not consistent across OS. The reason is that the 3d detector is built for running in realtime, and updates models based on time passed in realtime instead of timestamps. This means that the model will update less often compared to the total amount of frames if the detection runs faster.
Edit: This is at least one possible reason for the different outcomes.
The Figure.
@papr Thank you for the explanation. I found a reason why there is higher confidence on Linux, but it sounds incredible.
After getIntersectedCircle finishes in https://github.com/pupil-labs/pupil-detectors/blob/master/src/singleeyefitter/EyeModel.cpp#L116-L117 , the unprojectedCircle value is equal to the circle value on Linux, which causes oberservationFit.value = 1 in calculateModelOberservationFit. I was stunned by what I observed, but it did happen. On Windows the value is normal. I also thought it was impossible, because unprojectedCircle is a const value. But it did happen on Linux, at least on my computer.
@user-1ccccf ok, thank you. I will have a look at both, the finding as well as your question above. Unfortunately, I will not be able to come back to you in this regard today anymore.
OK, thank you very much. Just reply to me at your convenience
@user-1ccccf Btw, we found a reason why the confidence is not consistent across OS. The reason is that the 3d detector is built for running in realtime, and updates models based on time passed in realtime instead of timestamps. This means that the model will update less often compared to the total amount of frames if the detection runs faster.
Edit: This is at least one possible reason for the different outcomes. @papr Yes, I also noticed the problem, so I often use real time as the timestamp. Therefore, I just need to keep the model-update timing at 30 FPS. You mentioned the real time here: https://github.com/pupil-labs/pupil-detectors/blob/master/src/singleeyefitter/EyeModel.cpp#L151-L152
How do I add a world video in Pupil Player?
For me, the world video is a separate screen recording.
@user-c563fc hey, please be aware that the Pupil software is not meant for remote eye tracking, which seems to be your goal :)
Yeah. Because I am using the HTC Vive, I now want to use the gameplay as the world video, and that gameplay is a separate screen-recording video.
@user-c563fc ah, I understand. But you did not record it via the Screencast feature, did you? If not, you will have to generate timestamps for the externally recorded screen recording
Also, you will have to figure out the proper intrinsics to use for the virtual camera from which the scene was filmed
Sorry about it.
I am thinking of the best way for the system to tell the participant to perform different tasks (all in one video): when to start and stop doing each task.
I hope it's clear now sir
@user-7daa32 I still struggle with understanding of what you actually need, to be honest. I feel like I cannot help you properly because of that.
Is it general advice on how to perform the experiment with the subject? Or are you missing specific functionality within Pupil Capture? Or is it something in addition to Capture, that you need?
Is it general advice on how to perform the experiment with the subject? Or are you missing specific functionality within Pupil Capture? Or is it something in addition to Capture, that you need? @papr
Sorry about it.
I am thinking of the best way for the system to tell the participant to perform different tasks (all in one video): when to start and stop doing each task.
I hope it's clear now sir
@user-7daa32 So, you want to automate the experiment instruction without programming, do I understand correctly? Have you had a look at https://www.psychopy.org/ before? It provides a graphical user interface for experiments, including playing sounds and displaying instructions and stimuli. You can use it without programmatically integrating Pupil Capture into it (in other words: without using the network api).
Keep in mind, the less you program, the more you will have to do manually after-the-effect. 🙂 Or in reverse: The more you program, the more time you will save when processing the recordings.
Hi, it's me again. Can you explain to me how Pupil decides what a fixation is?
@user-467cb9 This looks like the camera drivers for the eye cameras were not installed correctly. Which version of Pupil Capture and which hardware configuration do you use? @papr Hi there, I'm using the latest version, Pupil Capture 2.4.0, from your website
@user-467cb9 This looks like the camera drivers for the eye cameras were not installed correctly. Which version of Pupil Capture and which hardware configuration do you use? @papr what do you mean "hardware configuration"?
@user-b292f7 See the documentation https://docs.pupil-labs.com/core/terminology/#fixations
@user-467cb9 Do you use a Pupil Core headset? With how many eye cameras (Monocular vs binocular setup)? What generation are they (120Hz vs 200Hz)?
@user-7daa32 So, you want to automate the experiment instruction without programming, do I understand correctly? Have you had a look at https://www.psychopy.org/ before? It provides a graphical user interface for experiments, including playing sounds and displaying instructions and stimuli. You can use it without programmatically integrating Pupil Capture into it (in other words: without using the network api).
Keep in mind, the less you program, the more you will have to do manually after-the-effect. 🙂 Or in reverse: The more you program, the more time you will save when processing the recordings. @papr Thanks. I will look at it
@user-467cb9 Do you use a Pupil Core headset? With how many eye cameras (Monocular vs binocular setup)? What generation are they (120Hz vs 200Hz)? @papr Yes, I use Pupil Core headsets, both monocular. I have two headsets. How can I check the generation of the cameras?
@user-467cb9 Oh, these are the very old ones. I do not think that Capture ships auto-install drivers for these. You definitely will have to install the drivers manually. Please follow steps 1-7 from these instructions. The eye cameras will appear as "integrated camera", if I remember correctly.
Yes, integrated camera
So I'll start checking it
Hi all, I wanted to ask if it's possible to get access to a 3D model of the Pupil Core glasses for use in Unity (any Unity-compatible 3D model format), since we want to model the glasses in virtual reality. Is there already a model that I can get my hands on? Would really appreciate it :). Thanks!
@user-6e3d0f These are the only models that we provide officially: https://github.com/pupil-labs/pupil-geometry
Ah thanks, I hadn't seen that
and given the readme, I can build the 3D model from those CAD files, right?
@user-6e3d0f Check out the GitHub preview of the files. I think they might not be what you are looking for. They are meant as interface documentation for hardware add-ons.
Ah yes, that's what I thought too when looking at the files. I want a 3D model of the glasses themselves so I can put them on an avatar in virtual reality. The idea is to just import a "ready" model of the Pupil Core (https://pupil-labs.com/products/core/) glasses and use it in Unity. Do you get what I mean?
@user-6e3d0f I think so but I do not know if I can help you with that. Try contacting info@pupil-labs.com in this regard.
I'll drop them an email. Anyways thanks for your help 🙂
@papr Hi, my last question has been solved. Now I have two other eye IR cameras, which are different from the Pupil Labs ones. How can I measure the focal length of these eye IR cameras to replace the value in https://github.com/pupil-labs/pupil-detectors/blob/master/src/pupil_detectors/detector_3d/detector_3d.pyx#L47-L48 ? I also found that the focal length affects the z-coordinate of the 3D eyeball centers and pupil centers. But I don't know how to measure the focal length of my eye IR cameras.
@user-1ccccf if the IR cameras are compatible with Pupil Capture, you could select them as world camera and run the camera intrinsics estimation plugin. Be aware that you might need to print the circle pattern in such a way that it is visible to the IR camera.
@papr Yes, but I found the IR camera may not capture an image of the computer screen. I wonder how the focal length of the integrated camera of Pupil Labs was measured.
@user-1ccccf Correct, that is why I suggest trying to print the pattern. Some printers are able to print it such that it becomes visible.
@papr OK, I understand. Thank you very much. Btw, are there any specific requirements for the printer? Can you provide more information?
@user-1ccccf Basically, you need paper that reflects IR light and color that does not.
@papr OK, thank you very much.
Is there any way to get the scene camera video on LabStreamingLayer? Our previous workaround (before using LSL) was to send a ZMQ message to start recording in Pupil Capture at the same time as we start recording in our acquisition software. This is not robust at all and requires some cleanup work (the acquisition software data and the Pupil video aren't saved in the same folder, etc.).
@user-ee7c3e Not using the current implementation. You basically would need to extend pupil_capture_lsl_relay.py with another outlet that is responsible for streaming the video via https://github.com/sccn/xdf/wiki/Video-Compressed-Meta-Data
https://github.com/labstreaminglayer/App-PupilLabs/blob/master/pupil_capture/pupil_capture_lsl_relay.py#L47
events is a dictionary containing the optional key frame. Its value is an instance of the pyuvc class Frame: https://github.com/pupil-labs/pyuvc/blob/master/uvc.pyx#L82 You would need to extract the codec and raw data and push it to the video outlet.
@papr Thanks, that's what I started looking at, but I couldn't find the keys of the events variable, so thanks for the tip 😉 I don't think I'll go through with all that, but if I do, I'll make sure to make a PR.
Related question: is the scene camera accessible via opencv? If so, could I use something like VideoAcq (https://bitbucket.org/neatlabs/videoacq/) to record it?
@user-ee7c3e If the camera is being used by Pupil Capture, it is usually not possible to access it at the same time using a different software. I think it is possible to access the cameras using OpenCV in general but I do not know if you have the same control over it as if you would have when using Pupil Capture.
Hello - I downloaded the new Pupil apps for Mac, and when I open old recordings in Player, the calibration, fixation detection, etc. fail because there is "no gaze data available to find fixations". Is there a fix for this?
Hi all! I'm receiving this error under the same conditions: recorded on 1.2.3 Pupil Mobile, viewing in 2.4 Player on Mac. Is there any way to fix it?
@user-56be96 Have you turned on offline pupil detection and run offline calibration successfully?
@user-908b50 let me check out the recordings first. I will let you know. @papr Just sent you 3 recording folders! I am now having difficulties with both version 2.4.3 and 1.11. With the exception of one folder, the other 3 I have tried so far aren't recognized by 1.11. As a reminder, we collected the data using v1.11.
@user-908b50 I won't be able to have a look at this today. I will come back to you as soon as I had a look.
@papr alright, please let me know as soon as you are able to. Thanks!
Let me know if you need the raw files. The ones I sent are processed ones.
Hey, trying to figure out how to import the pupil size .pldata files into a jupyter notebook
The load_pl_datafile function yields an error
NameError: name 'Serialized_Dict' is not defined
Does anyone know a solution? I just want to access the pupil size time series.
@user-4ddeb2 try to use this version of the function: https://gist.github.com/papr/81163ada21e29469133bd5202de6893e
Got it, thanks!
What's the time format of these timestamps? If I run datetime.fromtimestamp on them I get 1970 dates.
347345.211961 example
@user-4ddeb2 check our tutorial on how to convert the timestamps to datetimes https://github.com/pupil-labs/pupil-tutorials/blob/master/08_post_hoc_time_sync.ipynb
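The reason for the 1970 dates is that Pupil timestamps come from a monotonic clock, not the Unix epoch. The conversion in the tutorial boils down to adding the offset between the system clock and the Pupil clock, both captured at recording start; a sketch (function name is mine, the two start times correspond to the synced/system start times stored with the recording):

```python
from datetime import datetime, timezone

def pupil_ts_to_datetime(pupil_ts, start_time_synced_s, start_time_system_s):
    """Convert a Pupil timestamp (monotonic clock, e.g. 347345.211961)
    to a wall-clock datetime using the offset recorded at start."""
    offset = start_time_system_s - start_time_synced_s
    return datetime.fromtimestamp(pupil_ts + offset, tz=timezone.utc)
```

Once the offset is applied, datetime.fromtimestamp produces sensible dates instead of 1970.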
Hi! I'm new to Pupil and am thinking of getting the Pupil Core with USB-C mount to allow for prototyping at a later stage in my project. However, initially I just want to get a feel for the Core and its capabilities as originally designed. What scene cameras/sensors are best supported and/or most commonly used for the Pupil Core with USB-C mount?
I am using the Core headset to detect the diameter_3d of the pupil. During the same recording, the model_id changed a few times.
That gives me real trouble. Can I make it use the same model_id during the whole recording? Of course, I will try to keep the subject's eye as stable as we can.
@user-c1bc31 Hi 👋 yes, you can 'freeze' the model. Go to each eye window in Pupil Capture and click "Freeze Model".
Thanks. I will try it.
For diameter_3d, sometimes I get measurements like 3.05E-07 mm. How come the value is extremely small? Pupil size should be around 2mm to 10mm.
Hey everyone, for a project I need to run the software on an NVIDIA Xavier. Does any of you have experience with installing the dependencies on ARM? Even though I could install everything (except for pupil-apriltags), I am not able to launch Pupil Capture or Service. Thanks in advance!
@user-da7dca Hey 👋 What's the error that you get when you start Capture?
30 ../sysdeps/unix/sysv/linux/waitpid.c: No such file or directory.
@user-c1bc31 Given that you fit your eye model correctly, it is still possible that bad 2d pupil detection can cause bad 3d estimations. We recommend removing low confidence data to avoid this. As an additional post-processing step, you could remove data points that exceed your expected diameter bounds.
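As a sketch of that post-processing on an exported pupil_positions.csv (the column names match the standard export; the sample values and the 0.6 confidence threshold here are illustrative):

```python
import pandas as pd

# Small example frame; in practice: df = pd.read_csv("pupil_positions.csv")
df = pd.DataFrame({
    "confidence":  [0.95, 0.3, 0.99, 0.9],
    "diameter_3d": [3.2, 5.0, 3.05e-07, 4.1],  # mm; one implausible outlier
})

# 1) drop low-confidence samples (0.6 is a commonly used cutoff)
df = df[df["confidence"] >= 0.6]
# 2) drop physiologically implausible diameters (~2-10 mm for human pupils)
df = df[df["diameter_3d"].between(2.0, 10.0)]
print(df)
```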
@user-da7dca Is this everything? I am not sure if this is an issue with cysignals or if cysignals is not able to properly indicate what causes the issue.
this is the full dump I get when launching Core:
Looks like libglfw has an issue when calling strcmp(). This is a third-party dependency that we use for creating and managing windows for our UI. @user-878608 The issue might indeed be related to the architecture in use (aarch64). They are using an NVIDIA Xavier (see https://discord.com/channels/285728493612957698/285728493612957698/765132039229538314)
Hey, may I ask what device you are using? It seems like aarch64 isn't very compatible?
@user-da7dca unfortunately, I am not able to extract any helpful information from that 😕 How did you install cysignals?
@user-da7dca unfortunately, I am not able to extract any helpful information from that 😕 How did you install cysignals? @papr I could install it via pip
Hi :) My headset is very unstable, and even when it works at its best it is still lying. After calibration it is still extremely inaccurate in recognizing the point of view. Is there any app which may correct that, please? Otherwise it's just a useless thing, unfortunately. For example: a person looks at a 1 m x 0.8 m object from a distance of 2 m; after calibration, the reported gaze location is still wrong within the object's confines. Thanks
@user-5e36de Hey 👋
"My headset is very unstable, and even when it works at its best it is still lying" Are you talking about the video connection? Does the video freeze when you move the headset?
"After calibration it is still extremely not accurate in point of view" @user-5e36de Please start a recording (hitting R) before calibrating, run a calibration, and share the recording with [email removed] such that we can give more precise feedback.
@user-da7dca ok. It is possible that there is a different module causing the segmentation fault but I am not able to tell which one. What is your issue with installing AprilTags?
When building the wheel via pip, I get an error that I have an unsupported version of CMake, even though I tried several supported versions by building them from source. Since I don't need AprilTags in my setup, I bypass this problem by removing the imports in pupil_src
@user-da7dca ok, understood. Sounds like a good solution. Based on the output, it looks like the issue does not happen on import but usage of the problematic module. My approach to debugging this would be to place several print-statements across service.py and try to narrow down the location of the crash.
alright thank you very much. will try this approach and come back with my findings 👍
"Please start a recording (hitting R) before calibrating, run a calibration, and share the recording with [email removed] such that we can give more precise feedback." @papr Hey, OK, I will try it again with a recording, then will write to you and to [email removed]. Thank you.
Hey everyone, for a project I downloaded and installed the latest core software version (2.4) on a Windows 10 machine. Pupil Player and Pupil Service run fine. However, when running Pupil Capture I get the error in the console as shown in the attached picture. The world window pops up but stays blank, saying "Not Responding". I have tried to run pupil_capture.exe as admin, but the issue remains. I have looked up the error in Google but could not find any helpful information so far. If anyone has an idea how to solve this issue I would love to hear about it. Any help is appreciated. Thanks in advance
@user-821b71 It looks like pyre (a network library that we use for various features in Pupil) is having trouble retrieving ~~its~~ an interface's unicast address. Unfortunately, the error message does not tell which interface is causing the issue. What network interfaces do you have on your computer?
@papr Thanks for your reply. My computer has the following network interfaces: Ethernet Adapter, WiFi Adapter (currently connected), Bluetooth, and Virtual Private Network Adapter.
@user-821b71 I have the suspicion that either the BT or the VPN is causing the issue. Can you disable the interfaces one by one and check each time whether Capture/Service starts correctly?
@papr Thanks for the advice. I will try disabling the interfaces tomorrow as it is a working PC and I am already home. I will keep you posted. Thanks anyway 👍
@user-821b71 Have a nice evening
Hey. Can't adjust the camera for my eyes. The program opens two separate windows for the right and left cameras, but the image is displayed from the front camera. How to fix it?
@user-8f5c75 There should be three windows in total, can you confirm this?
Each window should be previewing one of the cameras (assuming a binocular (two eye cameras) headset)
only three windows, but the image in them is from the front camera
@user-8f5c75 Do I understand correctly, that they are all displaying the same video? I am asking because I have not seen this issue before.
Yes
@user-8f5c75 Could you share a picture of the headset for reference?
@user-8f5c75 Which version of Pupil Capture are you running?
2.3, 2.4
@user-8f5c75 ok, great. Thank you for the information. 🙂 In one of the menus, please open the Video Source menu and enable "Enable Manual Camera Selection". Could you make a screenshot of the contents of the "Activate Camera" drop down menu?
@user-8f5c75 It looks like either (1) the driver for your eye camera was not correctly installed or (2) there is a connection issue with your eye cameras. Also, given the reference picture above, there should be at least one additional entry.
Did the eye cameras work before or is this the first time using Pupil Capture?
This is my first time using Pupil Capture
does this mean that the driver is installed on only one camera
@user-8f5c75 That is correct. Could you open the "Cameras" and "Imaging devices" sections? Are there any "Pupil Cam" entries?
@papr There is no Pupil Cam in these sections
@user-8f5c75 ok, thank you. Please contact [email removed] in this regard with your order number and this link https://discordapp.com/channels/285728493612957698/285728493612957698/765247925483339817
@papr Thank you very much, you helped me a lot!
Greetings!! I scanned the recent notes on Core but am not finding info about my troubles. Our Player stopped working and so I returned to download the software. The download is only in the RAR format and apparently they want to be paid for unzipping. Is there another method to obtain my updates? I heard that individual use is free, but I can't seem to locate how to get it. Any advice is appreciated! I love using this tool!
@user-2c338d At the bottom of the release notes, there is a link to WinRAR. You do not need to buy it. Simply click the blue download button to get it for free
@user-2c338d if you don't feel comfortable using WinRAR without eventually buying it, you can install 7-zip, a free program that does the same thing
@user-6bc565 @user-2c338d I can confirm that 7-zip works as well.
@user-da7dca ok, understood. Sounds like a good solution. Based on the output, it looks like the issue does not happen on import but usage of the problematic module. My approach to debugging this would be to place several print-statements across service.py and try to narrow down the location of the crash. @papr OK, I think I found the line where the program is crashing: service.py line 213. However, I have no idea what's wrong :/
@user-da7dca This is where all the plugins are initialized. Can you run Service with the --debug flag? This should give you more output, specifically which plugin is loaded at that time. If it does not work for you, replace this logger.debug with a print statement https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/plugin.py#L382
OK, I just used the --debug flag. I found out that it failed to load the fixation plugin; however, I don't think that is the real source of my error since it failed to display any interface
@user-da7dca try the print statement please. the logger output might be delayed
This is what i got
@user-da7dca ok, nice. We are getting closer. Could you check if any of these import statements causes the crash? https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/service_ui.py#L12-L21
nope seems to be fine
@user-da7dca How did you install glfw?
from source via github
@user-da7dca Also, my bad. We knew the imports would be fine as the error only appears after initializing Service_UI. Please check these lines https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/service_ui.py#L38-L177
OK, service_ui.py line 60 seems to be the point where it crashes
Hello, I asked previously about how to get my coordinates into visual degrees and I was sent this https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/fixation_detector.py#L140-L148 which is helpful, but I'm not the strongest programmer. I tried tracing this line: vectors = capture.intrinsics.unprojectPoints(locations) to figure out how you read the camera intrinsics file and where the unprojectPoints function is, but I couldn't find it.
@user-8b7bfd This is where the required class is defined: https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/camera_models.py
from camera_models import Camera_Model
camera = Camera_Model.from_file(
directory_path_with_intrinsics_file,
"world", # given a world.intrinsics file
# depends on your world video resolution:
camera_resolution,
)
camera.unprojectPoints(locations)
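Once you have the unprojected vectors, the position in visual degrees is just the angle between direction vectors. A standalone sketch of that last step (numpy only; the two vectors here are hypothetical stand-ins for `camera.unprojectPoints(...)` output, in camera coordinates with z pointing forward):

```python
import numpy as np

def angle_deg(v1, v2):
    """Angle between two 3D direction vectors, in degrees."""
    v1 = v1 / np.linalg.norm(v1)
    v2 = v2 / np.linalg.norm(v2)
    cos = np.clip(np.dot(v1, v2), -1.0, 1.0)  # clip guards against rounding
    return np.degrees(np.arccos(cos))

# hypothetical unprojected gaze directions
a = np.array([0.0, 0.0, 1.0])                       # optical axis
b = np.array([0.0, np.tan(np.radians(1.5)), 1.0])   # 1.5 degrees off-axis
print(angle_deg(a, b))
```

Measuring each gaze direction against the optical axis (or against another gaze direction, for dispersion) gives you values in visual degrees.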
thank you
I just noticed that I reported the wrong line, since I had already added some print statements. The actual source of the error is line 55 of service_ui.py (main_window = glfw.glfwCreateWindow(*window_size, "Pupil Service")). However, I don't know how to solve it. I already built multiple versions of glfw with no problem at all :/
Hey, I am facing the same issue as you. Did you manage to solve it?
@user-da7dca Please uninstall libglfw and try running the develop branch. We switched from our own glfw bindings to https://pypi.org/project/glfw/ Installing it via pip should provide you with a correctly built version of libglfw. (hopefully)
I tried it out with the glfw pip package and the develop branch, with the following results:
Hi, I have been trying to use the surface tracker plugin of the latest sourced release (through remote desktop), but the GUI freezes a lot more than before. Any suggestions?
@user-908b50 which branch are you using?
@papr 2.4.9 (source)
@user-908b50 Please use the surface_tracker_fixes branch instead
@papr so I revert back to the old 2.4.3 version and only update the surface tracker fixes?
@user-908b50 No, you should be able to check it out via git: git checkout surface_tracker_fixes
unless you have made personal modifications.
You might need to run git fetch first
Alright, let me see. I assumed updating it all would update surface tracker as well.
@user-908b50 I am not sure what you mean by "updating"
I updated my own fork. And then pulled those changes locally.
for pupil-labs/pupil
Ah, understood. I guess you are pulling from master. Have you made any special modifications to your fork?
Not to pupil directly. My repository does contain other processing scripts that I have been working on.
ok. What is the output of git remote -v for you?
origin https://github.com/fiza09/pupil (fetch)
origin https://github.com/fiza09/pupil (push)
upstream https://github.com/pupil-labs/pupil.git (fetch)
upstream https://github.com/pupil-labs/pupil.git (push)
hi
I have a pupil w120 e200b
I am trying to calibrate with Pupil Capture
but it always fails due to not enough pupil data
any information on how to fix it?
@user-908b50 try git merge upstream/surface_tracker_fixes. This will merge the surface tracker fixes into your fork.
@user-908b50 Should you get an error about glfwInit when running Pupil, you will need to install glfw via pip install glfw
@user-3ec552 Please make sure to adjust the eye cameras correctly https://docs.pupil-labs.com/core/#_3-check-pupil-detection
is it okay to wear a corrective lens?
@user-3ec552 contact lenses are ok, glasses are problematic if they occlude the pupil
I am wearing glasses
but they are not occluding the pupil
I can still see the pupil in the image
@user-3ec552 Could you share a screenshot of your eye windows with us?
@user-3ec552 2d detection looks good. Your 3d eye model does not look well fit. Try rolling your eyes until the green outline is as big as your eyeball (maybe slightly smaller) and the blue and red circles overlap as well as possible for all eye positions.
seems okay now
what does C T R mean on the left of the panel?
Three different types of calibration?
These are buttons. C is for calibration, T for testing / validation, and R is for recording.
@user-3ec552 I can generally recommend having a look at our Getting Started guide [1] and our Best Practices [2] if you have not done so yet.
[1] https://docs.pupil-labs.com/core/ [2] https://docs.pupil-labs.com/core/best-practices/
thanks
@user-908b50 Do you have an update for me on how the changes are working for you?
Hi, I'm a researcher from KAIST. I have a question about Blink detection from Pupil capture: how is the blink confidence calculated? Seems they are not the average of pupil detection confidence; the user guide (https://docs.pupil-labs.com/core/software/pupil-capture/#blink-detection) does not explain details about them. Where can I find more information about blink confidence?
Hi, I am having issues getting started. The world camera works, but both eye cameras don't supply any images. Tried the troubleshooting already (uninstall in the device manager). Did not help. Thanks for your support.
@user-e94c74 The blink detector convolves a step filter with the confidence signal. The resulting signal (green line in the Player timeline) peaks when there are very sharp drops or increases in the original confidence signal. When these peaks exceed the thresholds (yellow lines in the Player timeline), an on- or offset is detected. On- and offsets are aggregated to blinks.
@user-e94c74 You can find the implementation here: https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/blink_detection.py#L324
@user-4b966e Do I assume correctly that you are using a binocular 200Hz Pupil Core headset? Could you check the device manager and let me know how many "Pupil Cam" entries you can find? And in which category they are listed?
@papr Thanks for the quick reply. No, it is a 120 Hz device, just looked it up on the receipt. In my device manager, I used to see 3 Pupil Cams, 1 active and 2 hidden cameras. Now, after having done the troubleshooting, I see only twice the "Pupil Cam1 ID0" (once active and once hidden) under "libusbK USB Devices"
@papr Also tried on a second PC, same there
@papr but I do see ID1 twice and ID2 once (all entries hidden) below "Cameras"
@user-4b966e Mmh, ok. Please contact info@pupil-labs.com in this regard.
Hi there, I was wondering: with the 3d sphere x, y, z and circle x, y, z positions, what orientation are the x, y, and z axes (relative to the camera or to the headset)? I am trying to match it with a world space (which I have from 3d motion capture and can track the headset and camera)
@user-074809 pupil data is relative to eye cameras, gaze data is relative to the scene camera.
For reference, in our head-pose tracker we assume the scene camera position to be the position of the head https://docs.pupil-labs.com/core/software/pupil-player/#head-pose-tracking
@papr ok, sorry, I meant to orient myself: is the x axis forward-back, the y axis left-right, and the z axis up-down? I am specifically looking at the xyz in the 3d model pupil detection, not the gaze location as mapped on the scene camera video
@user-074809 we use the opencv 2d and 3d coordinate systems https://docs.opencv.org/2.4/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html
@papr perfect! thank you!
You're welcome!
Hi all,
I believe I'm having an issue with the Pupil LSL Relay. I can capture data from my EEG device and markers from Unity, which all look correct and hint that LSL is working correctly. However, although I'm able to see the pupil_capture stream appear both in a pylsl script and when analysing the xdf file created via LabRecorder (https://github.com/labstreaminglayer/App-LabRecorder), the data entry fields are filled with zeroes.
I've now tried to read in the pupil capture data via the pupil-helpers lsl_inlet.py script (https://github.com/pupil-labs/pupil-helpers/blob/master/LabStreamingLayer/lsl_inlet.py), which yielded no samples:
C:\Users\Liam\Documents\python\kivy>lsl_inlet.py
INFO:main:Looking for Pupil Capture streams...
INFO:main:Connecting to Liam-PC
INFO:main:Recording at lsl-recording.csv
DEBUG:main:0 samples written
(the last line repeated seven times)
Again the pupil_capture LSL stream has been created and can be seen by the script though no data is produced. Has the LSL relay plugin been updated for the new version of pupil capture?
@user-020426 starting with Capture v2.0, you need to calibrate first before gaze data is being generated. And if I remember correctly, the lsl data is based on gaze data.
@papr If i'm solely using the VR/AR addon cameras in the Vive am i still able to run a calibration? Or can hmd-eyes work reliably with the LSL?
@user-020426 the current hmd-eyes version works independently of the lsl relay. The latter should be started before connecting though as it adjusts the Pupil clock
Use hmd-eyes to calibrate, afterward you should see data being published to lsl
@papr thanks for the quick response, i'll have another go when i've got some time over the weekend and drop back in here in case i come across any issues.
Hello again! I was wondering, what's the difference between the blue circle + dot and the red circle + dot? I'm under the impression that the blue one is the result of the 2d pupil detector plugin. What might the red one be?
@user-3cff0d Blue is the result of the 2d detector, correct. Red is the 3d detector result. You can use them as an indicator for how well your model is fit. Both, blue and red, ellipses should overlay as much as possible in all possible angles. It looks like there is room for improvement in this particular case as the eye model outline (green) is much larger than the eye ball. When well fit, it should be slightly smaller than the real eye ball in the image. We recommend rolling your eyes to get good fitting.
Thanks! I'm actually running a forked version of the software with a neural network implemented into the 2d pupil detection but not the 3d pupil detection, thus the discrepancy (and the massive cpu usage)
@user-3cff0d Ah nice. We are working on supporting user pupil detection plugins. This should allow you to separate your implementation from the application and get rid of the fork. This is planned for the v2.6 release. 👍
Oh, great! That's very exciting!
@user-3cff0d Do you have a reference on the NN that you are using? Or are you still working on it?
I'm working with a research team at RIT to integrate the NN RITnet. However I'm unsure if I should be sharing the specifics just yet- I can get back to you after making sure
@user-3cff0d No worries. No need to share anything that is not ready for it. 🙂 Let us know once you have published your results/work. Keep your eyes on https://github.com/pupil-labs/pupil/releases regarding the mentioned changes.
Is there an ETA for the v2.6 release?
@user-3cff0d ETA for v2.5 is next week. Then, if everything works well, v2.6 two weeks after. I can let you know once we have a first implementation. You seem to work from source already, so you might be a great beta tester!
@user-3cff0d Where do you use your implementation primarily? Capture/Service or Player?
At the moment I'm using more of the Service than anything else, since I haven't yet been fiddling a whole lot with gaze detection
@user-3cff0d ok, good to know. The initial step is to support these plugins for real time use. Afterward, we will extend the support to Player.
Gotcha, that makes sense. I look forward to that!
What do the blue circle and red circle (around the pupil) represent? For 3d-model pupil size detection, does it use the red circle or the blue circle to calculate the pupil size in mm? From my experience, the blue circle is more stable than the red circle. However, the size calculation seems to be based on the red circle. I can send you my recording folder if you are interested.
@user-c1bc31 Please see my response above https://discord.com/channels/285728493612957698/285728493612957698/766385521291034636
Please help me, here is my problem: I am using pupil eye tracker hardware which writes out two files of interest: pupil_position.csv and pupil_timestamps.csv. Within pupil_positions.csv there is a column titled 'pupil_timestamps'.
The problem is: pupil_timestamps.csv does not have the same values as the column 'pupil_timestamps'.
Can anyone explain to me why they are different?
@user-563b04 I am not sure where the pupil_timestamps.csv file would come from. Can you confirm the file name?
@papr The original filename was pupil_timestamps.npy, sorry. I converted to .csv to make it readable in matlab. So to restate my original problem: " I am using pupil eye tracker hardware which writes out two files of interest: pupil_position.csv and pupil_timestamps.npy. Within pupil_positions.csv there is a column titled 'pupil_timestamps'.
The problem is: pupil_timestamps.npy does not have the same values as the column 'pupil_timestamps'. Can anyone explain to me why they are different"
@user-563b04 First of all, the exported pupil_positions.csv only includes data from the export range, set by the trim marks. Therefore, it is possible that it contains fewer values than the pupil_timestamps.npy/csv file. Additionally, comparing floating-point numbers is always tricky, especially if one side is read from a text-based file. Use appropriate comparison functions instead of ==, e.g. numpy.isclose(): https://numpy.org/doc/stable/reference/generated/numpy.isclose.html or Matlab's ismembertol: https://www.mathworks.com/help/matlab/ref/ismembertol.html
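A quick illustration of the floating-point pitfall (the timestamp values are made up; the tiny perturbation stands in for text round-trip noise):

```python
import numpy as np

# hypothetical timestamps: one loaded from .npy, one re-read from a CSV export
ts_npy = np.array([347345.211961, 347345.220295])
ts_csv = np.array([347345.211961, 347345.2202950002])  # text round-trip noise

print(ts_npy == ts_csv)            # naive == can fail on the last digits
print(np.isclose(ts_npy, ts_csv))  # tolerance-based comparison succeeds
```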
@papr pupil_positions.csv has a 'timestamp' value of approx 4.5e5 whereas the pupil_timestamps.npy file has a 'timestamp' value of approx 3.5e5. We export the entire length, so I am not sure why the trim marks matter here. The most important question I am wondering about is what time format each uses: pupil time or system time
@user-563b04 pupil software always exports pupil time. If you want you can share the recording with data@pupil-labs.com for us to review
@papr OK, what about sample rates? What sample rate does it export at?
@user-563b04 it exports the active pupil data within the export range. From your description it looks like the intermediate data has an offset compared to the export. I do not see any reason for that. The easiest way for me to find the cause for the issue would be to reproduce it using the recording.
There is no fixed export sampling rate.
@user-563b04 are you using the pupil data from the recording or from post-hoc pupil detection?
@papr we are running pupil player and drag and dropping a recording into the window and pressing 'e' for export. That is what generates pupil_positions.csv
@papr On the left, this is the pupil timestamps column header I mentioned in pupil_positions.csv... On the right, this is the pupil_timestamps.npy in a csv file. Notice that the offsets are different AND the sequential increase is also different. Curiously, they are the same length
@user-563b04 interestingly, the Pupil timestamps on the right are duplicated, while on the left, each timestamp appears four times. How was the recording made? Did you use any type of time sync? How exactly did you convert the npy file to csv?
@papr We did not use a time sync, I believe. We made the recording using Pupil Capture. To convert the npy file, I just read it using Python and wrote it to a csv file
I really can only guess what is going wrong here. Please share the original recording with data@pupil-labs.com so I can have a look. You do not need to share the video files if you do not want to. I am only interested in the remaining raw data. Without it it is too difficult to tell.
@papr I have just sent it to you
Thank you! I will come back to you via email once I have reviewed the recording.
It seems that the detection of pupil size is more stable with the 2d model (blue circle) compared to the 3d model (red circle) under conditions of sudden changes of light intensity in the environment. We are recording the eye pupil's response to a flashlight (used in photography). Can the 2d model measure the pupil size in mm?
Hello again! I was wondering, what's the difference between the blue circle + dot and the red circle + dot? I'm under the impression that the blue one is the result of the 2d pupil detector plugin. What might the red one be? @user-3cff0d very nice high resolution image of the eye. Do you use different cameras?
Hello everybody, I'm currently writing a Python script to read the data in real time via the API, which works so far. The x and y coordinates are read from the norm_pos field and output via print. I have shown the data in a plot for the x and y directions. However, when the eye position changes, I cannot make an exact statement because the measured values are extremely scattered. Does anyone have an idea why this could be or how the values are to be interpreted? Greetings, Dominik
Plot is from x-axis at different Eye Positions
@user-a6e660 Are you plotting pupil or gaze data? Also, do you split the data by eye id? Pupil data is relative to the eye camera and is therefore not comparable between eyes.
@user-a6e660 Checkout our notebook tutorial on loading and visualizing pupil data. Specifically the "Plot Pupil Positions" section might be of interest to you https://github.com/pupil-labs/pupil-tutorials/blob/master/01_load_exported_data_and_visualize_pupillometry.ipynb
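Here is a minimal sketch of splitting pupil data by eye before plotting, as suggested above; the column names match the standard pupil_positions.csv export, but the values are made up:

```python
import pandas as pd

# stand-in for pd.read_csv("pupil_positions.csv")
df = pd.DataFrame({
    "eye_id":     [0, 1, 0, 1],
    "confidence": [0.9, 0.95, 0.2, 0.9],
    "norm_pos_x": [0.45, 0.52, 0.10, 0.51],
})

# drop low-confidence samples first -- these cause much of the scatter
df = df[df["confidence"] >= 0.6]

# plot each eye separately; pupil data is relative to its own eye camera
for eye_id, eye_df in df.groupby("eye_id"):
    print(eye_id, eye_df["norm_pos_x"].tolist())
```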
Hey everyone, I have a problem with the offset of my data compared to the really observed objects, and I don’t know if it’s possible to correct this error a posteriori, after recording. Could anyone help me?
The idea is to ask a participant to look at a map for about 2 minutes, and then to analyze what he looked at, for how long, etc. After recording, we built a heatmap with QGIS software from the raw data. But we clearly see that there is a shift between what was stared at and what is shown on the heatmap. I can send the heatmap we built. For example, after discussing with the participant, he told us he looked at the top right part of the map, but the hot spots are more on the left. Is there any possibility to correct the raw data? (I can send you the raw data file we used.) Note that we have the same problem using Pupil Player for the heatmap, which means that it comes from the data and not the software.
And my second question is: what does the file “surf_positions_surfaceN” represent? Can it be helpful to resolve my problem?
Thank you for your answers and have a nice day !
@user-b259f6 If you recorded the calibration sequence, you can apply a manual offset correction in the "post-hoc calibration" of Pupil Player. https://docs.pupil-labs.com/core/software/pupil-player/#gaze-data-and-post-hoc-calibration
Alternatively, you can use this user plugin to apply a manual offset: https://gist.github.com/papr/d3ec18dd40899353bb52b506e3cfb433
See https://docs.pupil-labs.com/developer/core/plugin-api/#adding-a-plugin on how to add user plugins.
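For already-exported data, the same idea can also be applied offline. A hedged sketch (the offset values and file layout are assumptions for illustration; the plugin above is the supported route):

```python
import pandas as pd

# stand-in for pd.read_csv("gaze_positions.csv")
gaze = pd.DataFrame({"norm_pos_x": [0.40, 0.55], "norm_pos_y": [0.60, 0.62]})

# hypothetical constant offsets, estimated e.g. from a known fixation target
x_offset, y_offset = 0.05, -0.03

# shift normalized gaze and keep it inside the [0, 1] surface/image range
gaze["norm_pos_x"] = (gaze["norm_pos_x"] + x_offset).clip(0.0, 1.0)
gaze["norm_pos_y"] = (gaze["norm_pos_y"] + y_offset).clip(0.0, 1.0)
print(gaze)
```

Note that a constant offset only corrects a uniform shift; if the error varies across the field of view, post-hoc recalibration is the better option.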
Thank you for your quick answer! I'll look at this!
@papr Thank you very much for your answer. It works now!
hi @papr , I have shared the raw data files with you on google drive!
@user-14d189 Hi, yes I'm fairly certain they're different cameras however I am not the one who took that particular video so I'm not sure exactly what was used
Hello, I want to be able to run Pupil Core from source code. I have completed installing the dependencies and cloning the repo, but there are some problems when running Pupil.
In a command prompt, I activate the virtual environment, change into the 'pupil_src' path, then enter 'run_capture.bat'. An error occurs as shown in the figure. The last line says: PermissionError: [WinError 5] Access Denied
Does anybody know what the mistake is? Thank you very much. 😁 @papr
@user-594d92 you need to update the packaging module
@papr Thank you. I'll have a try
Hi again, may I ask which module I should update? Is it this one?
pip install packaging -U
The module that needs updating is called packaging. 🙂
ohh, i know, thanks !😅
Hi. Just want to ask if this software has an SDK that we can use to integrate it into our Android app?
Can someone tell me, how the pupil diameter value is being calculated in the graph?
Because in exported raw data I see the values like 43,91654965, 42,3645332 , 52,24551901 and so on
@user-772ef5 The Pupil Core network API requires a computer running Pupil Capture. You could remote control and receive data from Pupil Capture on your phone. But this will likely not fulfill your use case.
@user-c563fc The timeline data does not show outliers that were removed using Tukey's fences. https://github.com/pupil-labs/pupil/blob/70e93bcb81073f767173087b4137a5078835a282/pupil_src/shared_modules/pupil_producers.py#L204-L212
Hi. I want to run the source code. But what does this error mean? Thank you very much
https://github.com/pupil-labs/pupil/blob/master/docs/dependencies-windows.md I followed the procedure on the website, 'Start Pupil' at the bottom. I entered "run_capture.bat", "run_player.bat" and "run_service.bat" from the command prompt, respectively. @papr 😀
@user-594d92 it looks like there are version mismatches between your python installation and some of the installed modules
The bat files assume that you followed the instructions. It looks like you installed the Pupil dependencies in an Anaconda environment. I am not sure if the bat files can handle that correctly
What is the output of python -V for you?
@papr This looks a little messy...😅
and...
@user-594d92 I meant upper case V
but that is ok.
I was interested in the version number which is included in the second screenshot.
This is the Python version of my virtual environment.
@user-594d92 Could you try starting python -v (lowercase v) again, and afterward call import numpy and import pyglui.ui? Please copy the output to a text-sharing service like https://gist.github.com/
ok, I'll just do it. 😁
I have uploaded it to this website. Thank you very much. @papr
@user-594d92 Could you share the link to the document with us?
ohh...😅 https://gist.github.com/mimic777/d8ac3fc0fcac5914120fea62a7f05e0a/revisions
@user-594d92 What is your output for python -c "import numpy; print(numpy.__version__)"?
This comment suggests that upgrading numpy solves the issue https://github.com/scikit-learn-contrib/hdbscan/issues/272#issuecomment-453958532
ok,I will have a try. thank you.
For error handling, does anyone know if there is a recommended approach for checking if Pupil Capture is running or if Pupil Core is plugged in? For example, when trying to connect with ZMQ, raise an error message instead of having the script just hang.
@user-430fc1 If I remember correctly, socket.connect() does not block, correct?
import zmq
ctx = zmq.Context()
socket = zmq.Socket(ctx, zmq.REQ)
socket.connect('tcp://127.0.0.1:50020')
socket.send_string("t")  # any command is fine here
# https://pyzmq.readthedocs.io/en/latest/api/zmq.html#zmq.Socket.poll
if socket.poll(1000) == 0:
    raise RuntimeError("Pupil application not reachable")
socket.recv_string()  # result can be discarded, but recv_string() must be called!
If I want to 3D print my own headset, are the files available for that?
Is there any way to check which eye cameras my Core glasses have? I only got them from the university, but the manual lists a 120 Hz focus camera and a 200 Hz no-focus camera, and they look pretty much the same in the manual
@user-6e3d0f You can share a picture of them with us if you want. I know them pretty well.
@user-666fd7 Check out https://docs.pupil-labs.com/core/diy/#getting-all-the-parts
@user-c563fc The timeline data does not show outliers that were removed using Tukey's fences. https://github.com/pupil-labs/pupil/blob/70e93bcb81073f767173087b4137a5078835a282/pupil_src/shared_modules/pupil_producers.py#L204-L212 @papr I am not sure exactly what you mean. The graph shows a range between 5.1-7.0mm for pupil diameter, but my pupil diameter values look like 43,91654965, 42,3645332, 52,24551901. So the lower range value should be the minimum diameter value, but I cannot figure out the formula that converts values like 43,91654965, 42,3645332, 52,24551901 to 5.1mm
@papr
@user-6e3d0f that is a 200hz camera
@user-c563fc the graph does not display all values. Therefore, the lowest displayed value is not the minimum value. The formula is implemented in the linked source code. There is also a link in the comment that explains the procedure. https://en.wikipedia.org/wiki/Outlier#Tukey's_fences
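For anyone curious, Tukey's method itself is only a few lines of numpy. This is a generic illustration of the fences, not the exact Pupil implementation (the function name and sample data are made up):

```python
import numpy as np

def tukey_fences(values, k=1.5):
    """Return the (lower, upper) Tukey fences; values outside are outliers."""
    q1, q3 = np.percentile(values, [25, 75])
    iqr = q3 - q1
    return q1 - k * iqr, q3 + k * iqr

diameters = np.array([5.2, 5.5, 6.1, 6.4, 7.0, 43.9])  # one obvious outlier
low, high = tukey_fences(diameters)
inliers = diameters[(diameters >= low) & (diameters <= high)]
```

Anything outside `low`/`high` is treated as an outlier, so the displayed range comes from `inliers` rather than from the raw minimum and maximum.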
Once pye3d is installed and being loaded with the text "Using refraction corrected 3d pupil detector." in the command window, how do I switch to using that plugin instead of the default 3d plugin?
Or would that require modifying the code
@user-3cff0d You are using pye3d instead of the old 3d detector already then
Check the detector menu title in the eye window
It just says "Pupil Detector 3D" at the moment
Here's a screenshot of the actual command window where I see that message
@user-3cff0d Please try Restarting with default settings from the general settings.
Yep, that fixed it. 🤦♂️
Thanks!
Hello
I have just been trying out the new changes since I got back to town. I got your email and have updated my local repository to 2.5. My program freezes a lot (see pic). Is there a way to work around it? I have also pip installed glfw as per the newer release.
@user-908b50 Does the bundle release or the source version freeze? Or both?
@papr both do! The source release freezes more. Actually, I have downloaded the bundle release too. But for some reason it takes me to v1.11 (which is also on my pc) and not v2.5.
But for some reason it take me to v1.11 (which is also on my pc) and not v2.5. Source or bundle?
Bundle does that
The above pic is a screen capture of the source version
Then it looks like the bundle is not correctly installed. Please use sudo dpkg -i <path to deb file>
to install the deb files.
@user-908b50 When running from source and it freezes, does it continue after a while or do you see a traceback/error in the log messages?
Is there a way to solve the errors with source? I like seeing the error messages pop up when working with source and I enjoy working with more transparency.
@user-908b50 When running from source and it freezes, does it continue after a while or do you see a traceback/error in the log messages? @papr it continues after a long, long while. Let me share my messages.
The thing is, that I do not exactly know what your modifications are. Therefore, I would like to make sure the issue is also present in the bundle to ensure that the issue is on our side. 🙂
ahh okay, yes let me correctly install the bundle again!
Here is my terminal output
@user-908b50 That is an issue with your opencv installation.
You need to build opencv with tbb support or else the background legacy marker detection crashes.
This issue should not be present in the bundle.
Note: Installing opencv via pip or anaconda does not include tbb support.
Also, you might want to update your fork to upstream master
now that we have released. This should also include the v2.5
tag. 👍
@papr good to know why that happens (although it only happens for some recordings). I will take a look at opencv. It's frustrating to know that it wasn't installed correctly in the first place. I followed these instructions for opencv: https://github.com/pupil-labs/pupil/blob/master/docs/dependencies-ubuntu18.md. So, I actually installed it using sudo. Let's see if there is a way to edit the current installation to include tbb support.
@user-908b50 Well, this document does not tell you to use Anaconda, which you use in your setup. 🙂
But no worries, we can check if your anaconda environment is using the correct opencv version.
Also, you might want to update your fork to
upstream master
now that we have released. This should also include the v2.5
tag. 👍 @papr okay, so a correct merge should mean the application version must be 2.5.
@user-908b50 Well, this document does not tell you to use Anaconda, which you use in your setup. 🙂 @papr that's right!
@papr so a correct merge should mean the application version must be 2.5. @user-908b50 Correct, if the git tags are pulled correctly.
What is your output for conda list | grep cv
when executed in your pupil anaconda environment?
What is your output for
conda list | grep cv
when executed in your pupil anaconda environment? @papr i don't get an output.
conda list gets me the following packages.
Interestingly, there is no opencv. What is the output for python -c "import cv2; print(cv2.__file__)"
?
Yes, you are right! There is no cv2. Could it have been deleted between updates? That's the output: Traceback (most recent call last): File "<string>", line 1, in <module> AttributeError: module 'cv2' has no attribute '__file__'
@user-908b50 Since Pupil is still running, the anaconda environment is getting it from somewhere else. Our goal is to find that place.
@user-908b50 Since Pupil is still running, the anaconda environment is getting it from somewhere else. Our goal is to find that place. @papr yeap, i know that is in the python3 installation. /usr/lib/python3/dist-packages/cv2.cpython-38-x86_64-linux-gnu.so
AttributeError: module 'cv2' has no attribute '__file__'
Mmh, that is unexpected. Please try the following: python -v -c "import cv2"
/usr/lib/python3/dist-packages/cv2.cpython-38-x86_64-linux-gnu.so How do you know that for sure? The command above should confirm your statement. (Beware, there is a lot of output.)
If this is indeed the version that is being loaded, then our install instructions are incorrect and we will have to revert to the Ubuntu 16.04 OpenCV build instructions.
@user-908b50 Correct, if the git tags are pulled correctly. @papr okay, that's strange! I will try pulling from pupil labs directly. I don't see anything wrong with my git pull.
(pupillabs) [email removed] git pull https://github.com/fiza09/pupil From https://github.com/fiza09/pupil * branch HEAD -> FETCH_HEAD Already up to date. (pupillabs) [email removed] cd pupil_src (pupillabs) [email removed] python main.py player --version Pupil Player version 2.4.73 (source)
To update:
git fetch --all --tags
git merge upstream/master --ff-only
git describe --long
Mmh, that is unexpected. Please try the following:
python -v -c "import cv2"
How do you know that for sure? The command above should confirm your statement. (Beware, there is a lot of output.) If this is indeed the version that is being loaded, then our install instructions are incorrect and we will have to revert to the Ubuntu 16.04 OpenCV build instructions. @papr yes, I will re-install opencv. The command above gets me different errors. Basically, it seems like there is no such file or directory. It's odd because when I try this, it tells me there is a cv2 file: find /usr/ -iname cv2.* /usr/lib/python3/dist-packages/cv2.cpython-38-x86_64-linux-gnu.so
i will re-install opencv3.0
@user-908b50 No, please wait. Let's try to approach this systematically.
Please share the output of the above command.
To update:
git fetch --all --tags
git merge upstream/master --ff-only
git describe --long
@papr i get this finally: git describe --long v2.5-0-g70e93bcb. it worked with the force installation. thanks you!
/usr/lib/python3/dist-packages/cv2.cpython-38-x86_64-linux-gnu.so This file is to be expected there if you used apt to install opencv. The only issue is that we need to ensure that it is actually being used. 🙂
here it is!
force installation What do you mean by force installation?
@user-908b50 I think there is a small typo in your command. You typed -v
twice. It needs to be -v -c
🙂
What do you mean by force installation? @papr sorry, i meant fast-forward.
thanks for catching that!
extension module 'cv2' executed from '/home/fiza/anaconda3/envs/pupillabs/lib/python3.8/site-packages/cv2.so'
Now, we need to figure out, how it got there if conda list
does not even know that it is there.
So I installed using sudo apt and then followed the instructions here (https://stackoverflow.com/questions/37188623/ubuntu-how-to-install-opencv-for-python3/37190408#37190408) to get it to work with my program. In brief, I symlinked OpenCV.
Ah, ok, so /home/fiza/anaconda3/envs/pupillabs/lib/python3.8/site-packages/cv2.so
symlinks to /usr/lib/python3/dist-packages/cv2.cpython-38-x86_64-linux-gnu.so
?
Can you confirm that by running ls -l /home/fiza/anaconda3/envs/pupillabs/lib/python3.8/site-packages/cv2.so
?
seems like it: lrwxrwxrwx 1 fiza fiza 65 Aug 6 11:01 /home/fiza/anaconda3/envs/pupillabs/lib/python3.8/site-packages/cv2.so -> /usr/lib/python3/dist-packages/cv2.cpython-38-x86_64-linux-gnu.so.
the set-up worked! this is exactly what i did.
Ok, then I must apologize. Our documentation is wrong. Installing opencv via apt does indeed not work.
Please uninstall opencv via apt, and rebuild it using these instructions: https://github.com/pupil-labs/pupil/blob/master/docs/dependencies-ubuntu17.md#opencv
don't worry about it! The set-up does link to the stack overflow. its trial and error anyway.
Please share the output of the cmake -DCMAKE_BUILD_TYPE=RELEASE -DWITH_TBB=ON -DWITH_CUDA=OFF -DBUILD_opencv_python2=OFF -DBUILD_opencv_python3=ON ..
before continuing with make -j2
We need to confirm that the python installation is correctly found such that we can symlink the self-built opencv version into the environment as you have done it before.
This is what I am doing because libopencv files cannot be found otherwise. sudo apt-get autoremove opencv-data
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages will be REMOVED:
libfprint-2-tod1 libllvm9 libnvidia-cfg1-440 libnvidia-common-440 libnvidia-compute-440 libnvidia-compute-440:i386 libnvidia-decode-440 libnvidia-decode-440:i386
libnvidia-encode-440 libnvidia-encode-440:i386 libnvidia-extra-440 libnvidia-fbc1-440 libnvidia-fbc1-440:i386 libnvidia-gl-440 libnvidia-gl-440:i386 libnvidia-ifr1-440
libnvidia-ifr1-440:i386 nvidia-compute-utils-440 nvidia-dkms-440 nvidia-kernel-common-440 nvidia-kernel-source-440 nvidia-utils-440 opencv-data xserver-xorg-video-nvidia-440
0 upgraded, 0 newly installed, 24 to remove and 70 not upgraded.
After this operation, 80.5 MB disk space will be freed.
Do you want to continue? [Y/n]
sudo apt remove -y python3-opencv libopencv-dev
should do the trick
Please share the output of the
cmake -DCMAKE_BUILD_TYPE=RELEASE -DWITH_TBB=ON -DWITH_CUDA=OFF -DBUILD_opencv_python2=OFF -DBUILD_opencv_python3=ON ..
before continuing with make -j2
@papr here it is!
@user-908b50 I have to leave (eu timezone 😬).
-- Python 3:
-- Interpreter: /home/fiza/anaconda3/envs/pupillabs/bin/python3 (ver 3.8.5)
-- Libraries: /usr/lib/x86_64-linux-gnu/libpython3.8.so (ver 3.8.5)
-- numpy: /home/fiza/anaconda3/envs/pupillabs/lib/python3.8/site-packages/numpy/core/include (ver 1.19.1)
-- install path: lib/python3.8/site-packages/cv2/python-3.8
This is green light to proceed though! After compiling has finished, run the remaining install instructions. It will tell you where it installs the cv2*.so
file. You will have to symlink it in the same way as you have done previously with the apt version.
Good luck! Make sure to delete the old symlink first and let us know how it went. 🤞
@papr alright, thanks a lot for your help!! Good night =]. I will run both source and bundle versions and let you know tomorrow.
I'm still getting the same error as earlier (attaching the error message) with the source version. Pupil player (bundle) also freezes in a similar manner.
I ran the same checks as last time. I believe the symlink isn't working properly within the conda environment and I am not sure why that is.
@user-908b50 Why do you suspect the symlink to not work as expected?
'cv2' loaded from '/home/fiza/anaconda3/envs/pupillabs/lib/python3.8/site-packages/cv2.so'
This looks correct. What is the output of ls -l /home/fiza/anaconda3/envs/pupillabs/lib/python3.8/site-packages/cv2.so
?
@user-430fc1 To be honest, I am not sure what causes the 3d ellipse to jump there. The model is stable. The 2d detection looks solid. I ran the offline pupil detection and the jump reduced to a single scene frame. It will take some time to look into this issue. @papr Were you able to investigate this any further? I've been running into the same issues quite frequently. 2d detection is fine, confidence is great, but the 3d model jumps around and is unstable.
@user-908b50 Why do you suspect the symlink to not work as expected? This looks correct. What is the output of
ls -l /home/fiza/anaconda3/envs/pupillabs/lib/python3.8/site-packages/cv2.so
? @papr (pupillabs) [email removed] ls -l /home/fiza/anaconda3/envs/pupillabs/lib/python3.8/site-packages/cv2.so lrwxrwxrwx 1 fiza fiza 60 Oct 21 20:27 /home/fiza/anaconda3/envs/pupillabs/lib/python3.8/site-packages/cv2.so -> /usr/local/lib/python3.8/site-packages/cv2/python-3.8/cv2.so I can't see why the GUI would freeze on the surface tracker plugin, right when I am selecting legacy markers. You suggested the last time this happened that this could be because of opencv. I believe conda list | grep cv2 does not output anything because conda was not used to install cv2 anyway. Anyway, the freeze also happens with the bundle version. The error messages on the terminal refer to pyglui and cpython packages.
I did leave the program on overnight though. What I can do is try it on Windows. I wonder if its a Ubuntu 20 problem.
@user-908b50
"I believe conda list | grep cv2 does not output anything" The reason for that is that you symlinked it, correct. "The error messages on the terminal refer to pyglui and cpython packages" Yeah, I noticed this part, too. Nonetheless, if I understood correctly, the problem is that there is an interaction between the background and foreground detection that causes the issue. The traceback might not be showing us the real reason for the issue.
Giving Windows a try should fix the issue. Unfortunately, I don't have an Ubuntu 20 instance to reproduce the issue.
yeap, conda was not used to download opencv*
I have Windows too. I will give that a try today.
Hello everyone.
Please is this possible?
The system will command a participant to look at a starting point and then command the participant to look at AOI A
And then tell the participant to go back to the starting point
Then tell the participant to look at AOI B.
And then tell the participant to return to the starting point.
And on and on...
All these will be in one video
Is this possible?
Please if you have any idea, please let me know. Thank you
@user-7daa32 hi, yes, this is definitely possible using Psychopy or other experiment building software.
@user-7daa32 hi, yes, this is definitely possible using Psychopy or other experiment building software. @papr I am glad it's possible. Thanks. So I can Google it and download it. Do you have expertise with it, or do you know anyone who is using it or has used it? A published source might help. Thanks
Please, can Capture identify when the participant looks at A? The system (or Capture) needs to know when the participant has located an AOI in order to give further commands. I am trying to figure out how to flag times in the videos
Hello, I collected some pupil data from a participant. At the beginning I had a great model from both eyes. But during the capture, the eye model changed as I did not check the "Freeze model" button. Is it possible to freeze the one I had at the beginning and recompute the pupil data according to that model ?
@user-7daa32 You can do this by subscribing to surface-mapped gaze data but you will need to program this functionality.
@user-19f337 Yes, you can run the post-hoc pupil detection. Once the model is well fit, you can pause the detection, freeze the models and restart the detection with the frozen model.
It worked, thanks !
@user-7daa32 You can do this by subscribing to surface-mapped gaze data but you will need to program this functionality. @papr thanks... I am not sure I understand this. You mean I will need to write code to do this? I am still using an old version of the Pupil Labs software; is that a new feature? Although, I have been using the surface plugin.
@user-7daa32 Yes, you would have to write code. This is not a new feature. You would have to setup the AOIs as surfaces and subscribe to the surface-mapped gaze data (like in this example https://github.com/pupil-labs/pupil-helpers/blob/master/python/filter_gaze_on_surface.py) to check when the subject is gazing onto the AOI.
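A minimal sketch of that approach, adapted from the linked pupil-helpers example. The surface name "AOI_A" and the `gaze_on_aoi` helper are placeholders of mine, and the loop assumes Pupil Capture is running with the surface tracker enabled:

```python
import zmq
import msgpack

def gaze_on_aoi(surface_msg, aoi_name):
    """Return the gaze points in a surface message that fall on the named AOI."""
    if surface_msg.get("name") != aoi_name:
        return []
    return [g for g in surface_msg.get("gaze_on_surfaces", []) if g["on_surf"]]

def watch_aoi(aoi_name, addr="127.0.0.1", req_port=50020):
    ctx = zmq.Context()
    req = ctx.socket(zmq.REQ)
    req.connect(f"tcp://{addr}:{req_port}")
    req.send_string("SUB_PORT")  # ask Capture for its subscription port
    sub_port = req.recv_string()

    sub = ctx.socket(zmq.SUB)
    sub.connect(f"tcp://{addr}:{sub_port}")
    sub.subscribe("surfaces.")  # all surface-tracker messages

    while True:
        topic, payload = sub.recv_multipart()
        msg = msgpack.loads(payload)
        if gaze_on_aoi(msg, aoi_name):
            print(f"subject is looking at {aoi_name}")  # trigger the next command here
```

From here, the experiment logic (issuing the next instruction, logging timestamps to flag the video) is up to you.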
Hi, I want to ask you again something about the fixations. You recommend we use the default settings, but I still have something wrong with my data: I don't get a normal distribution, and I have tried a lot of playing with the settings. Please, can you suggest what to do? How can I send you a file?
@user-b292f7 You can send data to [email removed]
Thank you
@user-b292f7 Please attach an explanation on what you are analysing and what you are expecting.
@papr hello. We used the AprilTags to track the surface, but no gaze point shows on the screen. So may I ask, do I still need to calibrate in Pupil Capture?
@user-594d92 Correct, you need to calibrate first before you get gaze data.
I use 2 screens. I put the AprilTags and the stimulus images on screen no. 1, and screen no. 2 shows Pupil Capture. So do I need to calibrate on screen no. 2 as well? Thank you @papr
@user-594d92 Please note, the calibration is independent of the surface tracking. Nonetheless, I would recommend doing the calibration on screen no. 1. and making sure that the Pupil Capture window is not visible in the scene video when doing the calibration. Else, you might get duplicated marker detections which can affect the calibration negatively.
Thanks so much @papr I will try it soon 🙂
@user-a6e660 Are you plotting pupil or gaze data? Also, do you split the data by eye id? Pupil data is relative to the eye camera and is therefore not comparable between eyes. @papr I have now divided the data into EYE0 and EYE1 and saved the data in a text file, but the values are still not meaningful because they do not change. Here is the code ... Does any of you recognize why there is no change? The data from the recording function seem to make sense, since a pupil deflection is detected there.
@user-a6e660 which version of Capture do you use?
Also, could you attach one of the resulting text files? Ideally one for each eye
We are using version v2.4.0. The data from the text file correspond to a recording of 25s. -> 5 seconds straight ahead -> 5 seconds left -> 5 seconds right -> 5 seconds up -> 5 seconds down
It's strange that there is a lot more data in a recording. Any ideas?
@user-a6e660 Since v2.0, we run both detectors, 2d and 3d. Therefore, you should get 2 (detectors) x 2 (eyes) x FPS
pupil datums per second.
@user-a6e660 You can use the method
key to differentiate the results from the two different detectors
@user-a6e660 Also, yes, it is indeed weird that your text files only include 140 samples each. I would suggest saving the timestamp, too, to check if your script drops intermediate results or if your script starts recording too late/stops too early.
Hello @user-a6e660, is it possible that you process too much data with Python and the computing power of your computer is not sufficient? The data you store in pupil_position is very large; this can lead to runtime delays. Have a look at the task manager to see what percentage of your CPU is being used. @papr is there a way to get the data directly from the data stream instead of saving it all?
Yes @user-b7ea86, I looked at the utilization and it runs at 100%. It also looks as if the data are written to the serial monitor (python shell) with an extreme delay. You can reduce the data stream to a certain extent, e.g. only save every 10th data frame.
@papr Thank you for the tips. How do I then apply the method in Python exactly?
@user-a6e660 I just saw that you are opening and closing the file handle for each received datum. This is definitely too slow, as predicted by @user-b7ea86. Just store the values in an array, and once you are done recording, serialize the array. Regarding the method, you can access it via pupil_position["method"]
it either returns 2d c++
or 3d c++
.
Also, yes, I would not print the results to the shell as this can be very slow as well, depending on which terminal you use.
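As an illustration of the suggestions above (buffer first, split later), here is a hedged sketch that groups an exported pupil_positions.csv by eye and method. The column names follow the standard export; the function itself is an example of mine:

```python
import csv
from collections import defaultdict

def split_pupil_data(csv_path):
    """Group diameter samples from a pupil_positions.csv export
    by (eye_id, method), e.g. ("0", "3d c++")."""
    by_group = defaultdict(list)
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            by_group[(row["eye_id"], row["method"])].append(float(row["diameter"]))
    return by_group
```

`split_pupil_data("pupil_positions.csv")[("0", "3d c++")]` would then give eye 0's diameters from the 3d detector only.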
Hi, I just collected some data using a Pupil eye tracker. I applied the post-hoc protocol to get the Unix timestamps, but apparently something went wrong because I got timestamps from 2019. Has somebody had this problem? Has somebody figured out how to fix it?
@user-265907 Could you share the info.player.json file with us?
Yes, jus give me a moment
Info
Hi, I have recorded head pose data, as well as gaze, blink and fixation data. I am wondering why my recording variables are all empty, e.g. 2020_26_10/000/gaze.pldata, 2020_26_10/000/gaze_timestamps.npy and so on. Are there any reasons for this?
@user-3ede08 Have you made sure that you calibrated before you started your experiment? You should see the gaze point once the calibration was successful.
@user-265907 What post-hoc protocol are you referring to exactly? This one? https://github.com/pupil-labs/pupil-tutorials/blob/master/08_post_hoc_time_sync.ipynb
Then I have exported the data (using Pupil Player). There is a file called pupil_positions.csv which seems to contain data concerning the 2 eyes. I don't understand why there are always 2 pupil_timestamps corresponding to a single (the same number twice) eye_id. Suppose I would like to plot pupil_timestamp vs confidence for each eye, how would I use this file's data?
@user-3ede08 Have you made sure that you calibrated before you started your experiment? You should see the gaze point once the calibration was successful. @papr I think so, because head pose data are available.
@user-3ede08 So which files are empty and which are not? Pupil positions has each timestamp twice, because we run 2d and 3d detector in parallel. Check the method
column to see which is which.
There is a model_id column which alternately contains the numbers 1 and 6.
When I look at the first two values of pupil_timestamps, I thought each of them corresponded to one eye.
@user-3ede08 So which files are empty and which are not? Pupil positions has each timestamp twice, because we run 2d and 3d detector in parallel. Check the
method
column to see which is which. @papr I think the recorded files are empty, and there are data in the exports file
@user-3ede08 I think you are mistaking model_id
for method
@user-3ede08 If the export contains data, and you did not run post-hoc detection, the intermediate files are not empty.
@user-3ede08 I think you are mistaking
model_id
formethod
@papr Oh yes, you are right.
For a single eye_id and at a specific timestamps we have the 2d and 3d model. But we don't have the 2d and 3d of the other eye_id at the same timestamps. Does it mean that it simultaneously records data ?
@user-3ede08 If the export contains data, and you did not run post-hoc detection, the intermediate files are not empty. @papr I got it, the problem was the calibration.
For a single eye_id and at a specific timestamps we have the 2d and 3d model. But we don't have the 2d and 3d of the other eye_id at the same timestamps. Does it mean that it simultaneously records data ? @user-3ede08 correct
@user-b292f7 Please attach an explanation on what you are analysing and what you are expecting. @papr Hi can you please check if you got my emails?
@user-b292f7 I can confirm that we received your emails. I still need to review the data. I will come back to you via email.
Thank you!!
Hi, I'm sorry to come back to you again, but I still have a problem with "shifted" data. I tried what you told me, post-hoc calibration, and the plugin you gave me. It's a bit better but I still have the problem. It seems that it's always the same for all participants: the data are systematically shifted towards the center. I'm sending you a picture as an example: hotspots are not exactly on what the participant really stared at, they're a bit more centered... Is it because the target is too small and the device cannot measure such small variations in gaze? Is there any other solution? Thank you again!
@papr yes, that one.
@user-265907 Interesting. Do you get this issue for all your recordings or only for this particular one?
Let me see, I have only tried the protocol with one recording so far.
@user-b259f6 I think the target is sufficiently large. I think the center-bias might be related to the 3d calibration. Have you tried 2d gaze estimation already?
@papr yes I did it with 2d gaze estimation
@user-b259f6 Actually, I misunderstood your previous question regarding the target size and need to correct my previous response.
The target size depends on your calibration/validation accuracy. The lower the accuracy, the larger the target needs to be.
@papr ok I understand. So if the problem persists even after a post-hoc calibration to try to get better accuracy, is there nothing to be done?
@papr this error only happens with that recording. Tomorrow I will collect more data and will try to pay attention to identify the error.
Hi. Is it possible to use the data export of Pupil Player without opening the Player? (I just want the pupil x and y positions.) (I'm new to this, sorry if the answer is too obvious.)
@user-765368 Not with first-party tools. Check out the "Scripts" section of our community work. It includes some example scripts that extract data from the intermediate data format without having to open Player for that. https://github.com/pupil-labs/pupil-community#scripts
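For orientation, those scripts essentially iterate over the records in the intermediate files. The sketch below reflects my understanding of the .pldata layout (a stream of msgpack-encoded (topic, payload) tuples, as written by pupil's file_methods); treat the community scripts as the authoritative reference:

```python
import msgpack

def load_pldata(path):
    """Yield (topic, datum) pairs from a Pupil intermediate .pldata file.

    Assumption: each record is a msgpack tuple (topic, msgpack-encoded dict),
    matching pupil's file_methods.PLData_Writer.
    """
    with open(path, "rb") as f:
        for topic, payload in msgpack.Unpacker(f, use_list=False):
            yield topic, msgpack.unpackb(payload)

# e.g. normalized pupil positions for eye 0:
# xy = [d["norm_pos"] for t, d in load_pldata("000/pupil.pldata") if d["id"] == 0]
```

The matching timestamps live in the sibling `pupil_timestamps.npy` file.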
Hi all, I am looking at getting an eye tracking solution and my choice seems to be between Pupil Labs and Tobii. I like the thought of the Pupil Labs device being open source, but some reviews suggest the headset moves a lot and calibration fails. Any advice or links to information on the two would be really helpful. Thank you.
Hello there! I'm new to Pupil Labs -- and eye tracking in general. My goal is to measure where individuals look as they walk along a 6 meter gait mat while they hold a tray with stable or tippy objects. I'm looking for advice on how to best calibrate the eye tracker for this task. I think during an experimental trial, they will look ahead toward the end of the mat as well as at the tray they are holding (particularly if the objects are tippy). Is it possible to calibrate for both the environment and the tray? Should I use the printed calibration markers and if so, how many should I use and what would be good locations to place them? Looking forward to learning from everyone!
hey @papr
Does this script made by you still works with current Pupil player?
@user-c563fc I think so, yes
@user-c563fc I think so, yes @papr and how does it work? Do I put the script in the recording folder and just run it?
@user-c563fc you run the script and pass the recording folders. The resulting csv file will be written into each according recording folder
Hi 🙂 I got export data [gaze_positions.csv] from Pupil Player. How can I calculate pixel coordinates from the normalized coordinates?
Hi! I want to use the online blink detection plugin in order to detect forced blinks and use them as human-machine interface input sources (i.e. discrete events that activate something, depending on the number of consecutive blinks). However each time that I blink the detector triggers kind of a random number of onset and offset blinks (e.g. onset, onset, onset, offset, offset, offset). For me, 1 blink=1 event (don't care if it's onset or offset). I tried to play with the plugin parameters and managed to deactivate onset or offset by setting the threshold to one, but still I obtain more than one event per blink. Do you have any suggestion? 🙂 thanks
@user-6f397b I would suggest writing to info@pupil-labs.com in this regard. Also, I can recommend having a look at the publications citing Pupil Core https://pupil-labs.com/publications/ Maybe, there are similar projects to your which you can use as reference.
@user-8d1ce2 I think using single marker calibration with physical markers makes most sense for you. Please make sure that the eye cameras setup such that the pupil is visible in both environments. Especially, when looking down, it is possible that the pupil might not be always easily visible. Adjusting the scene camera down before doing the calibration might also be necessary.
@user-594d92 You need to know the video frame size in order to denormalize the positions. See Cell 10 of our frame extraction tutorial for details: https://nbviewer.jupyter.org/github/pupil-labs/pupil-tutorials/blob/master/07_frame_extraction.ipynb
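The denormalization itself is a one-liner once the frame size is known. Note that Pupil's normalized coordinates have their origin at the bottom-left while image pixels count from the top-left, so y is flipped (the helper name is mine):

```python
def denormalize(norm_pos, frame_size):
    """Convert Pupil normalized coordinates (origin bottom-left)
    to pixel coordinates (origin top-left)."""
    width, height = frame_size
    x, y = norm_pos
    return x * width, (1.0 - y) * height

# e.g. for a 1280x720 world video, the frame center:
print(denormalize((0.5, 0.5), (1280, 720)))  # (640.0, 360.0)
```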
@user-03a2fe A blink consists of an onset and an offset. Due to the real-time aspect, you might get multiple detections of the same event (onset/offset). Your deduplication strategy depends on your requirements. Lowest possible latency -> use the first encountered onset. Highest accuracy -> use the onset with the highest confidence.
That said, I do not think it is easily possible to differentiate "forced" and "natural" blinks.
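For the deduplication, one option is to collapse each run of repeated events into a single one, picking either the first (lowest latency) or the most confident (highest accuracy). A sketch of mine, assuming blink messages shaped like {"type": "onset"/"offset", "confidence": ...}:

```python
from itertools import groupby

def dedup_blink_events(events, strategy="first"):
    """Collapse runs of repeated onset/offset events into one event per run.

    strategy: "first" (lowest latency) or "best" (highest confidence)
    """
    deduped = []
    for _, run in groupby(events, key=lambda e: e["type"]):
        run = list(run)
        deduped.append(run[0] if strategy == "first"
                       else max(run, key=lambda e: e["confidence"]))
    return deduped
```

Each deduplicated onset/offset pair then counts as one blink event for the interface.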
@papr in this photo the range is represented as 2.6-5.1mm. Is mm here millimetres?
@user-c563fc correct
@user-c563fc I think I need to correct myself. I think the y axis limits are -2.6
(not +2.6
) to 5.1
. The term "y axis limits" might be more accurate than "Range" here.
@papr Hi, I'm really sorry to insist but I really can't find a solution. Whatever I try, the fixations are not exactly where they should be, there is still a shift, for all my participants. Do you think you could have a look to an extract of my data, to see if there is nothing to do ? If it's possible for you, I can send you what you need of course. Thank you for all your answers
@user-b259f6 Please send one of the recordings that is especially difficult to correct to [email removed] We can have a look at it next week.
Thank you so much I'll do this !