Hello, I am looking for some advice on aligning data in time and wondering if my ideas are feasible. I see Pupil Core is compatible with "lab streaming layer" (LSL) for time alignment... I am not too familiar with this. My experiment will involve torque, EEG, and EMG data, which are all aligned using a DAQ that sends a one-second +5 V blip to signal the start of recording. This data acquisition is initialized in MATLAB. Ideally, I could use MATLAB to mark the start in the data and simultaneously start the Pupil Core recording, or at least mark the start of the trial in the Pupil Core data. As of now, I am only interested in pupil diameter. What are your thoughts and suggestions? I noticed Pupil has MATLAB integration, and I also noticed the LSL files were all Python.
Hi, @user-63b5b0 - LSL does seem to be a popular choice among researchers for synchronizing data collection across multiple systems, and I definitely think it would be worth your time to learn more about LSL, how it works, and why people use it. I'm not an LSL user myself, but the impression I have from interactions with others is that people generally use the LSL "LabRecorder" app to converge all of their data streams into a single .xdf file, which is then loaded/processed with their tool of choice (MATLAB, for example).
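To give a feel for what that looks like, here is a minimal sketch of inspecting a LabRecorder file with the Python pyxdf package (the file name "recording.xdf" is just a placeholder; MATLAB has an equivalent load_xdf importer):

```python
# Minimal sketch: list the streams recorded by LSL LabRecorder.
# Assumes `pip install pyxdf` and a file called "recording.xdf" (placeholder name).
import pyxdf

streams, header = pyxdf.load_xdf("recording.xdf")

for stream in streams:
    info = stream["info"]
    name = info["name"][0]
    stype = info["type"][0]
    n_samples = len(stream["time_stamps"])
    print(f"{name} ({stype}): {n_samples} samples")
    # stream["time_series"] holds the data; stream["time_stamps"] holds LSL
    # timestamps on a common clock, which is what enables cross-device sync.
```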
Hi, I have a very important question. I am using the Pupil Labs Core headset, but the front camera is broken. As a temporary replacement, I have attached a Logitech C615 camera in place of the front camera (the position between this camera and the eye cameras is now different, but the difference is small). To calculate the intrinsic parameters I run the "camera intrinsic estimation" plugin with radial distortion, and I see the parallel lines. To get good gaze accuracy I calibrate with 5 markers. My question is: do I need to indicate the new position between the cameras in any file, or does the calibration already account for the cameras being in different positions?
Hi @user-6eeda2! A few things: calibration correlates the calibration markers' positions in scene camera coordinates with certain parameters of the eye, like the pupil's centre position (this depends on whether you choose the 2D or the 3D pipeline).
The relationship between the world camera and the eye cameras should not matter for the calibration, e.g. in the VR headsets we use a virtual camera.
Regarding your second question, the distance can matter, especially with the 2D pipeline, which has no notion of 3D eye vectors. If you calibrate at close distances (0.3-0.6 m) but your task involves looking at far distances (5-6 m), you should consider accommodation and convergence. When we look at closer objects, not only does the crystalline lens change shape to keep objects sharp, but our eyes also converge on that point, meaning that convergence (a more inward pupil position) would be carried into the calibration. You should try to calibrate at the distance at which you will perform the task.
Additionally, coverage matters: if, for example, the monitor only covers a small portion of the field of view, gaze outside the rectangle shown after calibration will be less accurate, as it has to be extrapolated.
Without seeing the recording it is hard to give you further feedback, so feel free to make a recording including the calibration choreography (that is, start the recording, then calibrate, then do the task) and send it to data@pupil-labs.com so we can check it and give you more concrete feedback.
Another question to see if I am understanding how calibration works. I perform the calibration with 5 markers at a certain distance using a display. If I don't move, the calibration is good but if I move and get closer to the screen or move away the gaze position is no longer correct. If I want to use the glasses while moving, how do I get a good gaze position?
Hello. I used the device normally on Friday and now the camera for eye 0 is not working. I have not changed anything in the settings; it now says "local usb: camera disconnected!"
Hi @user-038ca2 ! We have also received your email, we would follow up there.
Thank you @user-d407c1!! I understand. How can I account for the accommodation/convergence ratio in the Pupil Labs app?
One way would be to calibrate at the distance at which you will perform the task. Besides the screen markers, you can also use the physical single marker https://docs.pupil-labs.com/core/software/pupil-capture/#single-marker-calibration-choreography and move it around, or you could use natural features: https://docs.pupil-labs.com/core/software/pupil-capture/#natural-features-calibration-choreography
Also, if you plan to record at different distances, you may want to present the marker at different distances and use the 3D gaze mapper. https://docs.pupil-labs.com/core/software/pupil-capture/#gaze-mapping-and-accuracy
Thanks for all @user-d407c1! Have a good day!
u2!
Hello! Maybe I'm asking a very rudimentary question, but I can't seem to complete the screen marker calibration. I'm running Pupil Capture on Windows 10. When I select the Screen Marker and start the calibration I get a flashing screen with a marker at the centre. Is there a way to solve this?
Hello, @user-a793a9 - I haven't heard/seen that problem before. Could you share a video? You may want to make sure your graphics drivers are up to date
Hello! I have a problem that I have not been able to figure out. I want to call the front (world) camera of the Pupil Core from Python to get the image data for further processing, but I have not found a way to do this. How can I access Pupil Core's front camera from Python?
Hi, @user-fafdae - you can receive video frames from the Pupil Core system using the network API. Here's a sample script: https://github.com/pupil-labs/pupil-helpers/blob/master/python/recv_world_video_frames_with_visualization.py
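For reference, that helper script boils down to something like the sketch below. It assumes Pupil Capture is running locally with Pupil Remote on its default port 50020, and that world frames are published as three-part messages (topic, msgpack metadata, raw BGR bytes) - check the linked script for the authoritative details:

```python
# Condensed sketch of receiving world frames over the Network API.
import zmq
import msgpack
import numpy as np

ctx = zmq.Context()

# Ask Pupil Remote (default REQ port 50020) for the SUB port of the IPC backbone
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")
pupil_remote.send_string("SUB_PORT")
sub_port = pupil_remote.recv_string()

# Ask Capture to start the Frame Publisher in BGR format (easy to use with OpenCV)
notification = {"subject": "start_plugin", "name": "Frame_Publisher", "args": {"format": "bgr"}}
pupil_remote.send_string("notify." + notification["subject"], flags=zmq.SNDMORE)
pupil_remote.send(msgpack.dumps(notification, use_bin_type=True))
pupil_remote.recv_string()

# Subscribe to world frames
sub = ctx.socket(zmq.SUB)
sub.connect(f"tcp://127.0.0.1:{sub_port}")
sub.setsockopt_string(zmq.SUBSCRIBE, "frame.world")

while True:
    # BGR world frames arrive as: topic, msgpack payload, raw image bytes
    topic, payload, frame_bytes = sub.recv_multipart()
    meta = msgpack.loads(payload)
    img = np.frombuffer(frame_bytes, dtype=np.uint8).reshape(meta["height"], meta["width"], 3)
    # `img` is now a regular BGR image array ready for further processing
```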
Thanks for your answer! I will try it later, hope it'll work! have a nice day
@user-cdcab0
Hello, I am new to the Pupil Labs community. For a university project I am trying to make an Android application that takes images from the Pupil Core glasses. Does anyone know how I can connect the glasses to my Android app (Kotlin or Java code) in Android Studio?
Hi, @user-3dc01d ππ½ - you'll want to take a look at the Network API (https://docs.pupil-labs.com/core/software/pupil-capture/#network-plugins). We don't have any official samples using Java, but you will need to use some ZMQ bindings for Java (https://zeromq.org/languages/java/). With that, it shouldn't be too difficult to port one of our Python examples (https://github.com/pupil-labs/pupil-helpers/blob/47ce5d4f99488492a4481a629fc7325c6107fbb6/python/recv_world_video_frames.py).
Hello, I am currently labeling all the surfaces in my Pupil Core recording in Pupil Player. Is there a limit to how many surfaces you can label in Pupil Player? I have run into a problem where adding 130+ surface markers (I'm doing a social interaction experiment with many markers) prevents the recording from being opened in Pupil Player after I finish adding the surfaces and want to re-open it. Sometimes Pupil Player will just open the eye1 video (to download the fixations) but nothing else before shutting down. Other times, Pupil Player simply won't open if the recording includes the file with my 130+ surfaces. If I use only a small number of surface trackers (4 surfaces), Pupil Player opens my recording just fine. I used AprilTags for my surfaces.
130 surfaces or 130 markers? Either way that's a lot, but I don't believe there's any hard limit being applied here - you're likely just limited by PC resources. Are there any error messages in the console when Player shuts down unexpectedly?
Hello, I am working on research that requires eye tracking. If I purchase the Pupil Core, is there a way I can make it mobile? I mean connecting it to a phone rather than a computer to collect my data.
Secondly, if I purchase a Neon (from my understanding it connects to a mobile device), would the software be free?
Thanks
Hi @user-f99202! Neon provides a more robust tracking solution for dynamic situations. It is indeed connected to a phone (included in the bundle). You also don't need to calibrate the system, and it works flawlessly outdoors. So, Neon is definitely the recommended solution if you want to go mobile. And yes, the software is included!
With that said, some of our customers have made their existing Core systems more portable by using a small form-factor tablet-style PC in a backpack, or by using an SBC like a Raspberry Pi to stream the videos.
Hello! I tried using my own camera. Previously, I had a 640x400 30 fps camera, and after changing the driver to libusbK it could perform pupil detection. Now I've switched to a 640x400 240 fps camera. After changing the driver, I found that it cannot run properly and gives the following error message. What could be the issue?
I tried changing the resolution, but there's still no image in the window. I also tried running "camera intrinsic estimation" as suggested, but I can't see the camera's image during the process, and it doesn't seem to work at all. What should I do? Is it possible that this camera is not compatible with Pupil Labs' software, or could it be because this camera has a higher FPS? I've used two 30 fps cameras before, and they worked fine.
Hi Team, I have a few questions concerning Pupil Player with Pupil Core: 1. If I use 2 (or more) gaze mappers with 2 individual calibrations mapped against one section of the data for validation, the accuracy result shows as "nan degrees from 0/33 samples" - how can I get the exact degrees? 2. Once I get the exact validation result, how do I apply it to tweak my old fixation points/coordinates? And how large an accuracy/precision value makes the data unreliable? 3. After the gaze mapping task, for some reason I only get fixation points and their numbers shown for the section I have gaze-mapped. How can I get all the fixations again, with post-hoc correction? Thank you very much in advance!
Hi! Could you tell me where gaze confidence is computed in the Pupil software? I am having some difficulty understanding the exact relation between the pupil confidences and the gaze confidence.
Hey @user-c68c98! Gaze confidence is derived from pupil confidence: it is the mean of the confidence values of the pupil data used in the calculation of the gaze datum. For further info on how pupil data are matched to compute gaze data, please refer to our documentation (https://docs.pupil-labs.com/developer/core/overview/#pupil-data-matching)
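In code terms, the relationship is roughly this (a sketch of the idea, not the literal source):

```python
def gaze_confidence(base_pupil_data):
    """Mean of the confidences of the pupil datums used for a gaze datum.
    For monocular gaze this reduces to the single pupil confidence;
    for binocular gaze it is the mean over both eyes' matched pupil data."""
    return sum(p["confidence"] for p in base_pupil_data) / len(base_pupil_data)
```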
Hi! I'm playing around with Pupil Core and have carried out some tests in order to plot the gaze results. Would you say these plots make sense? I'm a bit worried about the wake-like trails and the proximity between the eyes, and wondered if this is just normal or whether I might be doing something wrong... Thanks!
I hope to receive your assistance. Thank you very much!
Hello! When plotting the confidence for the 3D model (high light intensity condition), there seems to be a threshold at 0.7. Why is this?
model_confidence is a weighted average of 2D confidence (the number of pixels on the circumference of a fitted ellipse) and a value that measures how well the model fits to the current pupil observation. The optimal value here is 1. There is no threshold of 0.7, but rather you're probably not achieving a well-fit model.
The model fitting process is shown in this video: https://youtu.be/_1ZRgfLJ3hc
Does yours look similar, i.e. a robust red ellipse overlaying the pupil, and a blue circle surrounding the modelled eyeball?
Hi @user-52bb4d π. Thanks for sharing the plot. Would you be able to share a bit more information about the tests you have performed and the data you have plotted? Otherwise it's difficult to say whether the data are as expected.
Hi @nmt. Sure, I'm working on analyzing the horizontal saccadic movements of both eyes. I have implemented a simple test in PsychoPy in which a stimulus appears and disappears along the x-axis, and the image submitted is the result. The objective is to understand how these movements work by studying parameters such as latency or gain. I'm also having some trouble here because I'm not sure how to interpret the timestamps provided in gaze_positions.csv and adjust them to "real time".
No; for example, for my left eye I have a decent long-term model (blue) and 2D detection (yellow), but my red ellipse keeps sliding.
Then you'll probably need to optimise a few things to get a better model. Could you share a screencapture of the eye videos? That will help us point you in the right direction!
Here is an example :
Aha. Outdoors in direct sunlight can sometimes be tricky. Are you able to share the full recording folder with [email removed]? If you can, upload it to a file hosting service, like Google Drive, and share the download link.
I will, thank you
Hi Neil, I am not sure I understand your question either. As I mentioned in my previous post, I median-filtered and then interpolated the x_norm and y_norm surface gaze values. Now, we are trying to re-identify fixations using the I-DT algorithm as well as detect saccades. To re-identify fixations, we need to calculate dispersion using the gaze point values (visual angle). But the visual angle is only output in the gaze_positions.csv file. In the surface data, we have both gaze and world (multiple) timestamps. So, I am just wondering how to best reconcile cases where you only have the visual angle for two gaze timestamps, but not the rest (for a specific world index and/or world timestamp). I hope that makes more sense?
gaze_positions.csv and gaze_positions_on_surface.csv share the same timestamps. In other words, each surface-mapped datum in the latter has an equivalent in the former. If your interpolation method achieves evenly spaced samples, and you also apply it to the visual angle data, then there should be an equal number of matching timestamps in both of your new datasets. Can you not just match them rather than involving world timestamps?
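For example, with pandas (column names as in the standard Player export; treat the exact surface file name as a placeholder and adjust to your export):

```python
# Sketch: pair each surface-mapped gaze datum with its visual-angle equivalent
# from gaze_positions.csv by matching on the shared gaze timestamps.
import pandas as pd

gaze = pd.read_csv("gaze_positions.csv")
surf = pd.read_csv("gaze_positions_on_surface.csv")  # placeholder file name

merged = surf.merge(
    gaze,
    on="gaze_timestamp",
    how="left",
    suffixes=("_surf", "_world"),
)
# `merged` now carries the surface-normalized coordinates alongside the 3d gaze
# point columns for every surface-mapped sample. If your interpolated timestamps
# no longer match exactly, pd.merge_asof on sorted timestamps is an alternative.
```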
PsychoPy, Core and Saccades
Anyone familiar with using ZMQ to print pupillometry data in real time? I have copied the same scripts from the documentation for the IPC backbone... but I am getting the error: ZMQError: Address already in use... Is this the address for my port?
Are you seeing that error message from your code or from Pupil Capture? Can you share your code with us?
Hi Dom, yes. I am getting my info from this documentation: https://docs.pupil-labs.com/developer/core/network-api/
The code you're using isn't valid and doesn't match our examples. The error you're seeing is because you're attempting to start a new ZMQ host (the socket.bind line), when you should be creating a ZMQ client and connecting to Pupil Capture's host. Further, the data aren't formatted in the way you're trying to consume them.
Have a look at this example - it already does almost exactly what you're trying to do here. And let me know if you need further help π https://github.com/pupil-labs/pupil-helpers/blob/master/python/filter_messages.py
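The core of that example boils down to something like this sketch (assuming Pupil Capture is running locally with Pupil Remote on its default port 50020; the 3d detector must be active for diameter_3d to be present):

```python
# Minimal sketch: subscribe to pupil data on the IPC backbone and print it live.
import zmq
import msgpack

ctx = zmq.Context()

# Request the SUB port from Pupil Remote (connect, don't bind!)
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")
pupil_remote.send_string("SUB_PORT")
sub_port = pupil_remote.recv_string()

# Connect a SUB socket to the IPC backbone and subscribe to pupil topics
subscriber = ctx.socket(zmq.SUB)
subscriber.connect(f"tcp://127.0.0.1:{sub_port}")
subscriber.setsockopt_string(zmq.SUBSCRIBE, "pupil.")

while True:
    topic, payload = subscriber.recv_multipart()
    datum = msgpack.loads(payload)
    # diameter_3d is in mm and only present when the 3d detector is running
    print(topic.decode(), datum["timestamp"], datum.get("diameter_3d"))
```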
I am using two screens in my experiment. One will be my computer, where I hope to control the recording of the Pupil Labs device. The other will be the visual feedback for the participant, shown on a 32-inch flatscreen TV approximately 5 feet in front of them. Is it possible to broadcast the calibration onto a screen other than the default screen of the computer controlling the device?
The calibration is defined between the cameras on the headset and the wearer's pupils - it is agnostic to anything in the scene.
It sounds like you'll want gaze coordinates in screen space, so you'll need to use the Surface Tracker plugin. A surface can be defined on anything flat - it can be the computer screen that's running the Pupil Capture software, a completely different computer screen, a poster on the wall, a book, an area on the floor, etc. More info here: https://docs.pupil-labs.com/core/software/pupil-capture/#surface-tracking
I have conducted several experiments and the confidence is always less than 0.6. What should I do?
Hi, @user-130009 - it sounds like your models aren't well fit. There's some good info about best practices here https://docs.pupil-labs.com/core/best-practices/ that's definitely worth reviewing. A good fit should look something like it does in this video: https://www.youtube.com/watch?v=_1ZRgfLJ3hc
pupil data is here. @user-cdcab0 thank you
Hello. Is there a way to produce a notification when an event begins (i.e. a fixation has begun) and not only when it ends? As I see it now, the fixation detector produces a fixation message only when it ends.
I haven't looked at this directly, but it makes sense when you consider how fixation detection works. The algorithm really can't know that a fixation has started until at least some amount of fixation has already occurred
It is logical, but as you said, a fixation has started and then continues. For example, I want to produce an event in an app or a test while someone is fixating somewhere, doesn't producing the event at its end prevent me from doing that?
It's feasible that one could modify the code to emit a fixation notification earlier than the end of the fixation, but it will never be able to be emitted at the moment the fixation started. It will always be late and retroactively timestamped
I see. If it's a little late I guess it shouldn't be that bad, it makes sense.
Hello! I have a question concerning the eye camera window. What does the blue on the side of the window represent? I guess it is the intensity histogram of the image, and the cyan line represents the intensity threshold you can choose in the settings ("Pupil intensity range"). What are the red and white lines? Also, what do the yellow regions around the ROI mean?
Hi @user-ffe6c5! You can see a description of those visualisations in this paper: https://arxiv.org/pdf/1405.0006.pdf (fig 4)
Hi, we just bought a new Core device, but when I plug it into my laptop and start Pupil Capture, it can't get input from the cameras, even though it keeps showing "found the camera". Is it a problem with the settings or with the device itself?
I found the solution on the website; it turns out that you have to run it with administrator privileges to get access to the camera feed first. Thanks!
Hi, I want to ask about the camera device. I have two C922 webcams; when I run Pupil Capture, the software cannot open both webcams simultaneously. Is it because the camera names are the same? And if so, how do I change the camera name to Pupil Cam3 ID0 or Pupil Cam2 ID0 and so on?
Hi! I found that after gaze mapping with two gaze mappers from two individual calibrations, and turning on the "Activate gaze" button for both gaze mappers, there is one fixation point shown (with its order number), but it does not overlap with either of the gaze-mapped points. My question is: how do I know which gaze mapper the fixation point is computed from? Thank you very much in advance!
Hey, @user-6072e0 - I don't have two identical webcams, so I can't explore this myself unfortunately, but it does look like the names would need to be unique. However, I don't think there's a way to make two identical devices have different names.
Looking at the Windows driver installation procedure for our headset (https://github.com/pupil-labs/pupil/blob/c309547099099bdff9fac829dc97b546e270d1b6/pupil_src/shared_modules/video_capture/uvc_backend.py#L161), the three cameras each have a unique product ID, and so they can receive unique names. If your cameras are identical, they will likely have identical product IDs
Thanks for your answer. I've tried changing the name of the webcam but it doesn't work, so maybe I will buy a different webcam.
Hi, @user-daaa64 - I tinkered with this a bit, and it looks like the fixation detector uses the average of the active gaze mappers. So if you have two, the fixation point shows halfway between them.
Do you mean to have multiple gaze mappers active at the same time though? Typically you'd want to adjust the start/stop time of each gaze mapper for just the sections they apply to. To do that, set the trim marks by dragging the ends of the green rectangle on the timeline at the bottom of the screen so that the rectangle encloses the section of time a gaze mapper should be used, then, with that gaze mapper selected, press the "Set From Trim Marks" button to apply.
Thank you very much for the reply! I just happened to have multiple gaze mappers active, since I was trying to find which gaze mapper/calibration is the best fit for my data. The recording is long, with multiple calibration and validation processes throughout my experiment session, which is why I have multiple gaze mappers against the same section. I thought that only in this way could I check each gaze mapper's accuracy result and whether the post-hoc fixations are "on point" by checking the calibration/validation section of the recording after post-hoc processing. Do you think I'm on the right path?
Hello. I have some world.mp4 files from which I want to extract frames automatically. I'm using opencv and passing the frame number as it shows in pupil player. However, I get different frames. The first image is a capture from pupil player (frame 4434). The second one, frame 4434 from the video. My question is: what is the correspondence between pupil player frames and the actual video frames? I imagined it would be a 1 to 1 correspondence, but maybe not?
Hi @user-5ef6c0! OpenCV is known to be unreliable when extracting frames. You're probably better off using a tool like ffmpeg or decord, which another customer recently had success with: https://discord.com/channels/285728493612957698/1047111711230009405/1151947182702874706
Thank you, @nmt. I am exporting frames with ffmpeg now and I still have the same issue. However, OpenCV and ffmpeg do extract different video frames when provided with the same frame index. My question is: does any re-encoding occur when one exports the videos from Pupil Player? Please note that I am using the world.mp4 file that is in the recording folder, and not an exported version.
Just to build on top of my colleague's answer, have you seen these tutorials? https://github.com/pupil-labs/pupil-tutorials/blob/master/07_frame_extraction.ipynb
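If it helps, here's a hedged sketch with decord (mentioned above). It assumes you run it on the world.mp4 exported by Pupil Player (whose frame order matches the frame index shown in Player); world_timestamps.csv can be used to cross-check the timestamp of that frame:

```python
# Sketch: pull a single frame by index from the exported world video.
from decord import VideoReader
import cv2  # only used here to write the frame to disk

vr = VideoReader("world.mp4")
frame_index = 4434                     # the frame number shown in Pupil Player

frame = vr[frame_index].asnumpy()      # RGB image as a numpy array
cv2.imwrite(f"frame_{frame_index}.png",
            cv2.cvtColor(frame, cv2.COLOR_RGB2BGR))
```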
indeed, using the exported world.mp4 works
Hello, I have several recordings that range from 12 to 15 minutes in length, and I tried to export data, specifically fixations on surfaces, blinks, and iMotions data, using Pupil Player v3.3.0. While it works for some recordings, it throws an exception for others (see the attached log where it happens). Any clue why this is happening and how I can fix it?
Hi, @user-8f48c2 - doing a quick comparison of the code from 3.3 with the most recent release (3.5), it looks like a bug that has already been fixed. The code now includes an explicit check for that case.
Thanks for your response @user-cdcab0
Hello all, what is the minimum time required for calibration with the single marker to achieve the best possible results? Thank you in advance!
Hi @user-dfd547 π Achieving the best possible results with a Single Marker calibration is not solely a matter of time; it's more about capturing a sufficient number of gaze angles and the extent of the field of view covered. To learn more about this calibration choreography, I encourage you to explore our documentation, available at this link: https://docs.pupil-labs.com/core/software/pupil-capture/#single-marker-calibration-choreography
Thank you π
Hello! I have a question: how do I convert gaze positions into fixation data?
Hi @user-735418! In Core there are already two fixation detectors included: an online detector for Capture and an offline detector for Player, which you have to enable in the Plugin Manager (https://docs.pupil-labs.com/core/software/pupil-capture/#plugins). Here you can find more information about how these fixation detectors work: https://docs.pupil-labs.com/core/terminology/#fixations
thank you so much
Ah, I see. If you want to compare fixations in that case, you'd have to do them one at a time
Do you think I should proceed with creating individual gaze mappers with each calibration/validation and validating them to compare their accuracy results? And I suppose the eye data from validation (after the calibration process in the recording) can be used for gaze mapping too, since reference markers are detected during the validation stage as well?
Is there a streamlined method to accurately quantify the fixation count and duration time on an AOI (Area of Interest) from an exported file of eye-tracking data? I'm seeking an efficient approach to avoid manual counting of participants' eye fixations. Any insights or tools that can expedite this process would be greatly appreciated!
Hi @user-be0bae! I assume you are using the surface tracker. Is each surface an AOI, or do you have multiple AOIs in each surface?
two AOIs
Thanks for your reply! I want to obtain the fixation count and duration time for each AOI. Is there a simple way?
Are you comfortable using Python? This tutorial shows you how to compute pupil size for each fixation on a surface: https://github.com/pupil-labs/pupil-tutorials/blob/master/06_fixation_pupil_diameter.ipynb. It is not exactly what you intend, but it's a good starting point.
That said, you do not need to know Python to obtain what you want; you can simply use Excel. If you only defined one surface in the surface tracker but have 2 AOIs, you will need to know the coordinates of your AOIs and use norm_pos_x and norm_pos_y to determine whether a fixation belongs to the first or second AOI. I recommend that you create a surface per AOI (you can reuse the same markers and simply adjust the size). Once you have a surface per AOI, it is much simpler, as you should have one fixations_on_surface_<name>.csv file per surface. The first step then is to filter using the on_surf column to ensure that the fixation was on that surface (TRUE), and then count the rows and sum the duration column to obtain the number of fixations and total duration on that AOI.
Let me know if there is anything confusing, I just had my coffee ☕
Thank you very much, I understand what you mean. I defined a surface for each AOI and then generated two files: "fixations_on_surface_Surface 1" and "fixations_on_surface_Surface 2". I filtered the data for on_surf=TRUE, see the screenshot.
Now, I have a new question. Do you mean that each row represents a fixation point? Because I noticed that there are multiple rows of data for the same "fixation_id".
Sorry, the fixations-on-surface file may contain multiple instances of the same fixation due to the per-frame localisation, so you will also have to filter them down to unique fixation ids.
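Putting the whole procedure together, a small pandas sketch (column names fixation_id, on_surf and duration follow the standard export, and the duration column is in milliseconds; double-check against your files, and treat the file paths as placeholders):

```python
# Count fixations and total fixation duration per AOI, keeping one row per
# fixation id (the surface export repeats a fixation across world frames).
import pandas as pd

aoi_files = {
    "AOI 1": "fixations_on_surface_Surface 1.csv",
    "AOI 2": "fixations_on_surface_Surface 2.csv",
}

for aoi, path in aoi_files.items():
    df = pd.read_csv(path)
    df = df[df["on_surf"] == True]                 # keep only fixations on this surface
    df = df.drop_duplicates(subset="fixation_id")  # one row per fixation
    print(aoi,
          "fixation count:", len(df),
          "total duration (ms):", df["duration"].sum())
```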
@user-d407c1 Hi, if I want to see how Pupil Core calibration is calculated, where specifically should I look? For example, the code that calculates the calibration accuracy in visual degrees.
Hi @user-873c0a! The Pupil Capture software is open source; you have the whole code at https://github.com/pupil-labs/pupil/ but specifically the calibration lives at https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/gaze_mapping/
It's a valid strategy, but potentially a lot of work if you have a lot of subjects/recordings to go through
Thank you @user-cdcab0! Yes I know, but I have to ensure the quality of my data for further analysis. So far it's the best way I can find.
Hi, I'm doing a reading experiment, and one of the metrics I'm calculating is visual span. Has anyone tried to calculate it using Pupil Labs software? I have been trying to create an API plugin (https://docs.pupil-labs.com/developer/core/plugin-api/) that does OCR on the world camera to find the location of each letter and uses fixations to calculate it. But maybe because I used a lightweight OCR module (tesserocr), the detection results were not good. Can we get visual span from the data published by the Pupil software, e.g. fixations, blinks, etc.? Thank you in advance.
I am trying to get rid of any delay associated with computing pupil data in the Pupil Capture software. I assume it takes time to compute pupil diameter from the video recorded by the eye cameras. Is there a way to record JUST the raw data (which would allow me to align in time with my other equipment) and then LATER analyze the pupil data to find pupil diameter?
Hi @user-63b5b0! Video frames are timestamped when received in Pupil Capture. I assume you want to get rid of the delay in publishing the data through the network API due to processing, is that right? You can disable any non-required plugin to reduce the workload. That said, this seems unnecessary: if you simply want to sync with other devices, you can use the Lab Streaming Layer plugin https://github.com/labstreaminglayer/App-PupilLabs/ which largely takes away the burden of time sync, or use the network sync approach https://github.com/pupil-labs/pupil-helpers/blob/master/python/simple_realtime_time_sync.py
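The manual clock-offset approach from that helper boils down to something like this sketch (assuming Pupil Capture is running locally with Pupil Remote on its default port 50020; "t" is the Pupil Remote command that returns the current Pupil time):

```python
# Sketch: estimate the offset between your local clock and Pupil time.
import time
import zmq

ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")

t_local_before = time.time()          # use the same clock your other data use
pupil_remote.send_string("t")         # ask Pupil Remote for the current Pupil time
pupil_time = float(pupil_remote.recv_string())
t_local_after = time.time()

# Assume the reply arrived halfway through the round trip
offset = pupil_time - (t_local_before + t_local_after) / 2
print(f"clock offset (pupil - local): {offset:.6f} s")
# Any locally timestamped event can now be expressed in Pupil time as
# local_timestamp + offset, which is essentially what the LSL plugin automates.
```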
Hello, I bought the DIY Pupil Labs eye tracker and am in the process of setting it up. I tried to install the Pupil software (Linux version) on Raspbian but it would not install. Could you guide me on how to get it installed on Raspbian?
Hi @user-f99202! Pupil Capture has no native support for ARM processors like the Raspberry Pi's. Please have a look at my previous message on how to try to make it work, or alternatives: https://discord.com/channels/285728493612957698/285728493612957698/1140896658574557204
Hi @user-6072e0! OCR in the scene camera can be challenging and highly dependent on the lighting conditions, but perhaps you can use the surface tracker with AprilTags to define a page and define sub-AOIs inside it. You can still use OCR to detect the characters' locations in the image and use the fixations-on-surface CSV, for example.
Here is a tutorial on how to aggregate data over an image surface: https://github.com/pupil-labs/pupil-tutorials/blob/master/02_load_exported_surfaces_and_visualize_aggregate_heatmap.ipynb. I would also recommend that you have a look at https://github.com/pupil-labs/pupil-tutorials for how to work with the data post hoc.
So, in a nutshell: 1) add AprilTags to your reading document and record it with the Surface Tracker https://docs.pupil-labs.com/core/software/pupil-player/#surface-tracker 2) localise the characters' coordinates on the image using whatever approach you find most suitable, e.g. OCR 3) use the fixations and gaze-on-surface files to determine when the subject looks at the page, and use the on-surface coordinates to determine which character/word the subject is looking at.
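For step 3, a hedged sketch of the coordinate conversion (it assumes the surface was defined to match the page image exactly, and that the image dimensions and file name below are placeholders; surface coordinates have their origin at the bottom-left while image pixels start at the top-left, hence the flipped y):

```python
# Convert surface-normalized fixations to pixel coordinates on the page image
# used for OCR, so they can be compared with OCR word bounding boxes.
import pandas as pd

IMG_W, IMG_H = 2480, 3508  # size of the page image used for OCR (example values)

fixations = pd.read_csv("fixations_on_surface_page.csv")   # placeholder name
fixations = fixations[fixations["on_surf"] == True]

fixations["px"] = fixations["norm_pos_x"] * IMG_W
fixations["py"] = (1.0 - fixations["norm_pos_y"]) * IMG_H  # flip y axis

def hits_word(px, py, box):
    """box = (left, top, right, bottom) of an OCR word in image pixels."""
    left, top, right, bottom = box
    return left <= px <= right and top <= py <= bottom
```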
Let us know if you got stuck at some point
Thanks for your answer, I'll try that and contact you again if I face any problems ππ
Sorry, I only just tried this after a long time. I want to ask: the gaze coordinates we get from the gaze_positions_on_surface_<name>.csv file are relative to the surface, but the OCR result is in image coordinates. How do we compare the two, so we know that the gaze lies on a certain word? Thank you in advance.
Hi, I have a few questions about Pupil Core. Question 1: Is the 3D gaze point coordinate in the exported file in the world camera coordinate system? Question 2: Is the eye center 3D coordinate in the eye camera coordinate system? Question 3: How do I obtain the camera parameters? Question 4: How can the files in the red circle in the following figure be opened? Thanks
Hi @user-4bc389 !
1) gaze_point_3d_x is the x position of the 3D gaze point (the point the subject looks at) in the world camera coordinate system, as defined in our documentation (https://docs.pupil-labs.com/core/software/pupil-player/#gaze-positions-csv). You can read more about the coordinate systems we use here: https://docs.pupil-labs.com/core/terminology/#coordinate-system 2) In world camera coordinates; please refer to https://docs.pupil-labs.com/core/software/pupil-player/#gaze-positions-csv 3) Which camera parameters would you like to obtain? 4) Please refer to this previous message on how to read .intrinsics files: https://discord.com/channels/285728493612957698/446977689690177536/913512054667747389 or https://github.com/pupil-labs/pupil/blob/c309547099099bdff9fac829dc97b546e270d1b6/pupil_src/shared_modules/camera_models.py#L455
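For question 4, a hedged sketch: the .intrinsics files are msgpack-encoded, so you can inspect one like this (the exact key layout may differ between versions; camera_models.py linked above is the authoritative reader):

```python
# Inspect a Pupil .intrinsics file (world.intrinsics used as an example path).
import msgpack

with open("world.intrinsics", "rb") as f:
    intrinsics = msgpack.unpack(f, raw=False)

print(intrinsics.keys())
# Typically there is one entry per resolution, e.g. "(1280, 720)", containing
# the camera matrix, distortion coefficients and camera model type.
```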
thanks
Hi, after replacing the world camera, are the gaze coordinates also in the new world camera's coordinate system? Thanks
Hi @user-4bc389! After replacing the world camera, it is crucial to re-compute the camera intrinsics to ensure calibration accuracy: https://docs.pupil-labs.com/core/software/pupil-capture/#camera-intrinsics-estimation
Thank you
That means the coordinate system won't change, it's still under the world camera, right?
Yes, that's correct. The gaze coordinate system remains consistent regardless of the world camera being used.
Can I also ask why the validation sometimes seems not to work? There are fixations detected by other gaze mappers (with the same calibration but a longer time span) that include the same time section as this trimmed period. It can also happen that the validation shows a result after I set the trim marks, and sometimes it just detects far fewer samples or no samples at all. Please see attached. Do you know how I can fix this? Thanks in advance!
Hello! Reaching out for hardware help. Ideally I would like to measure the diameter of both pupils. My Core's right infrared camera only works occasionally; I can rely on the left infrared camera working, and working well. Any suggestions for troubleshooting? Or repair processes? Additionally, when I try to open the right eye camera (eye 0), it says "Local USB: Camera Disconnected!"
Usually that indicates a bad/poor calibration. I'm not sure why two gaze mappers with the same calibration would have different accuracy results. Do the gaze mapper time spans include the calibration routine?
Also, I found that sometimes you need to recalculate the gaze mapper and set the trim marks to the full video, and the accuracy result may then be generated (sometimes with a different value), but sometimes it isn't... Still, accuracy results for particular sections of a video can be tricky to get. It can be very frustrating, since the video is quite long and you have to wait for quite some time, only for it to turn out to be invalid results again...
I suppose the different accuracy results could be related to slippage, since they are mapped against different stages of the experiment? And yes, the gaze mappers in these two screenshots include screen marker and single marker calibrations. All my gaze mappers contain screen marker and/or single marker processes, and I used these mappers to map against the formal experiment procedures. Sometimes I check whether the gaze mapping is working by observing whether the fixation points are "on point" enough in the natural feature calibration, which is sometimes part of my "Map gaze in" span. Also, I found that the issue of "no values" or "no samples found" (shown in the screenshots) often occurs when I set a particular section to be gaze-mapped in "Map gaze in" or validated in "Validate in". If the span to be gaze-mapped or validated happens to be the full video, the accuracy results can normally be generated.
Hi! I am conducting an experiment to observe eye movements after rotation. My computer is a Mac, and the glasses are Neon. I have a few questions regarding Neon. Q1: Is it necessary to connect Neon to a computer and use Pupil Capture to collect eye movement data such as pupil position data? If Neon is only connected to a mobile phone (running the Companion app), can we only obtain a video of eye movement? Is that correct? Q2: How should I use the raw data exported from Pupil Cloud? I am unable to import them into Pupil Player, as it indicates that I need to update my version of Pupil Player, but I cannot find the installation package for the latest version.
Hi @user-d714ca ππ½ ! I've replied to your message here: https://discord.com/channels/285728493612957698/1047111711230009405/1156480473401401404
Good morning, we are going to buy a computer for our experiments. Do you consider anything else necessary in addition to the following specifications? Apple Silicon (M1 or M2), 16 GB RAM, Intel i7 CPU. Thank you so much!
Hi @user-4514c3 ππ½ ! Apple's silicon chips (like the M1 Air) work great with Pupil Core. See also this relevant message https://discord.com/channels/285728493612957698/285728493612957698/1097792741188042863
Hi, does anyone have any idea how to calculate fixations from gaze data (i.e. a formula)? Any guides?
Hi @user-eb6164 ! Please see my previous message below about the fixation detector algorithms employed in Core https://discord.com/channels/285728493612957698/285728493612957698/1153961497702191104
The algorithm code is open source and you can find it here:
https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/fixation_detector.py
Oh yes, I saw it, but I thought there might be a formula that would help me understand why the fixations are computed this way from gaze data using the algorithm.
Ohh! I see, well it is an implementation of a dispersion-based method. Perhaps this would help you:
Dario D. Salvucci and Joseph H. Goldberg. 2000. Identifying fixations and saccades in eye-tracking protocols. In Proceedings of the 2000 symposium on Eye tracking research & applications (ETRA '00). Association for Computing Machinery, New York, NY, USA, 71β78. https://doi.org/10.1145/355017.355028 https://dl.acm.org/doi/pdf/10.1145/355017.355028
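The core idea of the dispersion-based (I-DT) method is easy to sketch. This is a simplified illustration only; Pupil's own detector in fixation_detector.py works on 3D gaze angles and handles more edge cases:

```python
# Simplified I-DT (after Salvucci & Goldberg, 2000): a fixation is a window of
# samples that spans at least min_duration and whose dispersion
# (max_x - min_x) + (max_y - min_y) stays below max_dispersion.

def dispersion(window):
    xs = [x for _, x, _ in window]
    ys = [y for _, _, y in window]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

def idt_fixations(samples, max_dispersion, min_duration):
    """samples: list of (timestamp, x, y) tuples sorted by time.
    Returns (start_time, end_time, centroid_x, centroid_y) tuples."""
    fixations = []
    i = 0
    while i < len(samples):
        # initial window: the first samples spanning at least min_duration
        j = i
        while j < len(samples) and samples[j][0] - samples[i][0] < min_duration:
            j += 1
        if j >= len(samples):
            break
        if dispersion(samples[i:j + 1]) <= max_dispersion:
            # grow the window while the dispersion threshold is still respected
            while j + 1 < len(samples) and dispersion(samples[i:j + 2]) <= max_dispersion:
                j += 1
            window = samples[i:j + 1]
            cx = sum(x for _, x, _ in window) / len(window)
            cy = sum(y for _, _, y in window) / len(window)
            fixations.append((window[0][0], window[-1][0], cx, cy))
            i = j + 1
        else:
            i += 1
    return fixations
```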
Thank you!
Hello, I am trying to integrate Lab Streaming Layer with Pupil Core. I placed the necessary folders from GitHub into my Pupil Capture plugins folder, but my Plugin Manager is not showing LSL as an option. I am using a Mac. How do I get Pupil Capture to recognize it?
Hey, @user-63b5b0 - I see a couple of potential issues here:
1. You've copied the App-PupilLabs folder into your plugins folder, but you only want the pupil_capture_lsl_relay folder, which is two levels deeper. Move that folder up two levels.
2. Your pylsl folder is nested inside another pylsl folder, making it plugins/pylsl/pylsl. Move this one up one level.
3. Your pylsl is missing its liblsl binary, which on Mac and Linux has to be acquired separately (see the note here: https://github.com/labstreaminglayer/pylsl#liblsl-loading). You can download the Mac binaries for liblsl from their GitHub page (https://github.com/sccn/liblsl/releases). From that archive, take the contents of the lib folder and extract them to plugins/pylsl/lib
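If it helps, here's a small sanity-check sketch you could run from your Pupil Capture settings directory (commonly ~/pupil_capture_settings on Mac - treat that as an assumption) to confirm the layout matches the steps above:

```python
# Quick check that the plugin layout described above is in place.
from pathlib import Path

plugins = Path("plugins")
for required in (plugins / "pupil_capture_lsl_relay",
                 plugins / "pylsl",
                 plugins / "pylsl" / "lib"):
    print(required, "OK" if required.exists() else "MISSING")
```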
Hi, can I just check what the units in the gaze data refer to? For example, we have this data:
(-0.7101088235564816, -108.72276477823718, 442.24575165054046) (148.14312675270114, 34.23232986872651, -13.173346733526909)
which correspond to gaze_point_3d and eye_center_3d. What is the origin point, and what units are the readings measured in?
Hi, @user-b85e9c - welcome! You'll want to review the documentation here https://docs.pupil-labs.com/core/software/pupil-player/#raw-data-exporter and here https://docs.pupil-labs.com/core/terminology/#coordinate-system for these questions and more information