Hi, I'm a software developer who has been looking into integrating Unity3D + Pupil Capture/Service into my product. I am looking to install an Android build of a Unity project with Pupil onto an eye tracking headset to create an untethered experience. My question is whether Pupil has a Unity SDK, and whether Pupil would be compatible with an Android build target so I can build out to Android devices. Any help is greatly appreciated!
Hi, I'm trying to get gaze data, and I'm using C# because I'll be doing some AR and rendering work, so I'll probably use Unity. I'm stuck on a problem and can't get any gaze data: line 27 is null, so ReceiveMultipartMessage gives me null. I want to check whether it connected or subscribed successfully, but I don't know how and couldn't find anything on it. Could you please help me?
Hey @user-2a0b36! Check out hmd-eyes - this repo contains the building blocks to implement Pupil with Unity3D: https://github.com/pupil-labs/hmd-eyes. Note that Pupil Core software is not compatible with an Android build target
Thanks for the response. Does that mean it would require a tethered experience?
I also have one additional question. Is it possible to create a calibration routine without looking at a known target (i.e., "Gary, I need you to look up, now down, now left, now right.")?
Did you already check that you can subscribe to the gaze stream with our Python example (https://github.com/pupil-labs/pupil-helpers/blob/master/python/filter_messages.py) ?
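For reference, the core of that example boils down to something like this. This is only a minimal sketch, assuming Pupil Capture is running on the same machine with the default Pupil Remote port 50020:

```python
import zmq
import msgpack

ctx = zmq.Context()

# Ask Pupil Remote (REQ/REP) for the port of the PUB/SUB backbone
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")
pupil_remote.send_string("SUB_PORT")
sub_port = pupil_remote.recv_string()

# Subscribe to gaze topics on the SUB port
subscriber = ctx.socket(zmq.SUB)
subscriber.connect(f"tcp://127.0.0.1:{sub_port}")
subscriber.subscribe("gaze.")  # an empty string would subscribe to everything

while True:
    topic, payload = subscriber.recv_multipart()
    gaze = msgpack.loads(payload)
    print(topic.decode(), gaze["confidence"], gaze["norm_pos"])
```

Note that gaze messages only start flowing once Pupil Core is worn and calibrated.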
I hadn't seen that example. After a couple of tries I'm still getting null. I tried the subscribe-to-any-topic method and ReceiveString still gave me null. I can get the sub port, so REQ-REP is working, but PUB-SUB isn't. Do I need to change some settings?
Yes, it would be a tethered experience. We do have a demo calibration choreography in the hmd-eyes repo. You can modify its parameters: https://github.com/pupil-labs/hmd-eyes/blob/master/docs/Developer.md#calibration
I appreciate the quick responses. Thank you @nmt. You were a great help
Can you confirm whether the Python example works?
I will check it tomorrow. Because of the timezone difference my responses take some time, sorry for that.
Hello everybody. A question in relation to the transformation of scan path surface coordinates to pixel coordinates of the image: https://github.com/pupil-labs/pupil-tutorials/blob/master/03_visualize_scan_path_on_surface.ipynb
Is there an example to do the same thing but for the coordinates of both eyes separately?
We don't have an existing solution for this I'm afraid
Thanks in advance!
OK, it could be quite important for the research we want to do. I will post updates on whatever solution we come up with. As a simple solution, I was thinking of somehow combining the surface_gaze table with the normal gaze table, using the world index as the reference. I need to study this more, because it might be completely wrong. Other solutions might be more complex and require more time. Anyway, thank you very much.
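For what it's worth, that merging idea could be roughly sketched with pandas like this. The file paths and column names (world_index, gaze_timestamp) are assumptions based on typical Pupil Player exports and may need adjusting to your files:

```python
import pandas as pd

gaze = pd.read_csv("exports/000/gaze_positions.csv")
surf = pd.read_csv("exports/000/surfaces/gaze_positions_on_surface_Screen.csv")

# Both tables are derived from the same gaze datums, so matching on the
# gaze timestamp (plus the scene frame index) should line them up.
merged = surf.merge(
    gaze,
    on=["world_index", "gaze_timestamp"],
    how="left",
    suffixes=("_surface", "_scene"),
)

print(merged.columns.tolist())
print(merged.head())
```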
The Surface Tracker transforms '2d gaze' from scene camera to surface coordinates. The problem with gaze normals is that they are in 3d camera space: we find the 2d gaze coordinates by projecting the nearest intersection of lines in 3d space (i.e. gaze normals; the line going through the eyeball centre and the object that's looked at) back onto the 2d scene camera plane. What might help you is this dual monocular gazer: https://gist.github.com/papr/5e1f0fc9ef464691588b3f3e0e95f350. Off the top of my head, I'm not sure how the gaze coordinates for each eye are stored, however. Worth taking a look.
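To illustrate the projection idea only (this is not the actual gazer code), something along these lines maps a 3D point in scene camera space to 2D pixel coordinates. The intrinsics below are placeholders, not your scene camera's real calibration:

```python
import numpy as np
import cv2

# Placeholder intrinsics; substitute your scene camera's calibrated values
camera_matrix = np.array(
    [[790.0, 0.0, 640.0],
     [0.0, 790.0, 360.0],
     [0.0, 0.0, 1.0]]
)
dist_coeffs = np.zeros(5)  # real scene cameras have non-zero distortion

# A 3D point already expressed in scene camera coordinates
# (e.g. gaze_point_3d or a point along one eye's gaze normal)
gaze_point_3d = np.array([[50.0, -20.0, 500.0]])

# No extrinsic transform needed because the point is already in camera space
image_points, _ = cv2.projectPoints(
    gaze_point_3d, np.zeros(3), np.zeros(3), camera_matrix, dist_coeffs
)
print(image_points.ravel())  # pixel coordinates in the scene image
```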
Thank you. I will take a look.
This is a graphical representation of what I described:
The Python version has the same problem: REQ-REP is working but the subscription isn't. The "topic = sub.recv_string()" line gives no error, but the program stops at that line; the "while True" loop keeps running but gives me no result. Sorry for the picture, but my research centre restricts its computers, so I could only take photos. @nmt
PS. "pyzmq version 25.0.0, msgpack 1.0.5(the documentation version is deprecated), python version 3.9"
First thing to note is that you'll need to wear and calibrate Pupil Core before obtaining gaze data. It's also worth mentioning that you either need to run the script on the same computer as Pupil Capture, or the computer must have a network connection to Capture that isn't blocked by a firewall.
I see, thank you for your help mate, I will test these things, have a nice day
Aside from your hardware, have folks adapted this to work with [albeit awkward] smartphone cameras?
Hi @user-9c3ec3. Core software is compatible with third-party cameras that are UVC compliant. Check out this Discord message for details: https://discord.com/channels/285728493612957698/285728493612957698/747343335135379498. You might also be interested in Pupil DIY: https://docs.pupil-labs.com/core/diy/#diy. Difficult to say whether you'd have success with the vast array of smartphone cameras, though.
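If you want a quick check of which cameras the UVC backend can see, a rough sketch with pyuvc (the camera library Pupil Capture builds on) might look like this; treat the exact field names as assumptions:

```python
# Requires pyuvc: pip install pyuvc
import uvc

for device in uvc.device_list():
    # Each entry is a dict describing a UVC-compliant camera;
    # 'name' and 'uid' are the fields Capture lists in Activate Camera.
    print(device["name"], device["uid"])
```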
Hi, I was wondering if there's a way of getting the data for both eyes after using the dual monocular gazer plugin and later defining a surface? It seems like the surface plugin gives back the data of both eyes merged again and not as independent ones.
Yes, the surface tracker will do that. You'd essentially need to modify it to transform the dual gaze estimates into surface space
Good evening. The camera turns on, but what should I do next? Maybe a specific driver is needed?
Please share the pupil capture.log file. Search on your machine for 'pupil_capture_settings'. Inside that folder is the .log file.
camera r200
Please help me set up my camera
Realsense
Hello! I'm a student and I want to use your program in my academic year project. Is there any chance I can use your program with my laptop's built-in camera or a USB camera instead of your headset? I am trying to do this but I get this error. Thank you.
Question: What is the scope of your project? Are you building your own wearable eye tracker with your own eye and scene cameras?
You can use other UVC compliant cameras. If they are UVC compliant, they should automatically appear in the Activate Camera drop-down.
That's exactly what I'm trying to do in my project. I can see my camera in the Activate Camera menu, but every time I try to choose it, I get the error "this camera is already used or blocked".
Custom camera with Pupil Capture
Hello,
How is the heatmap calculated from the surface data? Is there a formula to it? Is there code I can access?
Hi @user-50567f. You can see the source code here: https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/surface_tracker/surface.py#L609
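For a rough intuition only (the linked source is the authoritative implementation), the general approach amounts to binning on-surface gaze into a 2D histogram and smoothing it. The export filename, column names, and blur parameters below are assumptions:

```python
import numpy as np
import pandas as pd
import cv2

surf = pd.read_csv("exports/000/surfaces/gaze_positions_on_surface_Screen.csv")
surf = surf[surf["on_surf"] == True]  # keep only gaze that landed on the surface

# Bin normalised surface coordinates into a coarse grid (arbitrary resolution)
grid_w, grid_h = 64, 36
hist, _, _ = np.histogram2d(
    surf["y_norm"], surf["x_norm"],
    bins=(grid_h, grid_w),
    range=[[0, 1], [0, 1]],
)
# Depending on the surface orientation you may need to flip the y axis.

# Smooth and normalise so the result can be colour-mapped as an image
hist = cv2.GaussianBlur(hist.astype(np.float32), (0, 0), sigmaX=2)
if hist.max() > 0:
    hist = hist / hist.max()
heatmap = cv2.applyColorMap((hist * 255).astype(np.uint8), cv2.COLORMAP_JET)
cv2.imwrite("heatmap.png", heatmap)
```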
Hello, I have an issue running the script https://github.com/pupil-labs/pupil-helpers/blob/master/python/filter_messages.py subscribed to all messages. After the message frame.eye.0 I receive the following error:

frame.eye.0: {'topic': 'frame.eye.0', 'width': 192, 'height': 192, 'index': 112322, 'timestamp': 2725.910855, 'format': 'bgr'}
Traceback (most recent call last):
  File "/home/mathieu/pupilabs/test_reception.py", line 35, in <module>
    topic = sub.recv_string()
  File "/home/mathieu/pupilabs/lib/python3.10/site-packages/zmq/sugar/socket.py", line 927, in recv_string
    return self._deserialize(msg, lambda buf: buf.decode(encoding))
  File "/home/mathieu/pupilabs/lib/python3.10/site-packages/zmq/sugar/socket.py", line 826, in _deserialize
    return load(recvd)
  File "/home/mathieu/pupilabs/lib/python3.10/site-packages/zmq/sugar/socket.py", line 927, in <lambda>
    return self._deserialize(msg, lambda buf: buf.decode(encoding))
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x89 in position 34581: invalid start byte

I think it is related to the image data. The other messages are received fine. I am using the dev branch, zmq == 25.0.2, msgpack == 1.0.0.
To receive frames, please use the receive frames helper script: https://github.com/pupil-labs/pupil-helpers/blob/master/python/recv_world_video_frames.py
OK, thanks. My purpose was to explore all the data sent over the network; that's why this bug was frustrating.
You can of course receive the topic name with the filter message script, as it's a string. But the payload is different and would need handling as such. See https://github.com/pupil-labs/pupil-helpers/blob/master/python/recv_world_video_frames.py#L49 and https://github.com/pupil-labs/pupil-helpers/blob/master/python/recv_world_video_frames.py#L91
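As a rough sketch of what that handling looks like: frame messages carry extra multipart frames with the raw image bytes, which is why recv_string() chokes on them. This assumes the Frame Publisher format in Capture is set to BGR (as in your output) and the default Pupil Remote port:

```python
import zmq
import msgpack
import numpy as np

ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")
pupil_remote.send_string("SUB_PORT")
sub_port = pupil_remote.recv_string()

subscriber = ctx.socket(zmq.SUB)
subscriber.connect(f"tcp://127.0.0.1:{sub_port}")
subscriber.subscribe("frame.world")

while True:
    parts = subscriber.recv_multipart()   # [topic, msgpack payload, image bytes, ...]
    payload = msgpack.loads(parts[1])
    if payload.get("format") == "bgr":
        h, w = payload["height"], payload["width"]
        img = np.frombuffer(parts[2], dtype=np.uint8).reshape(h, w, 3)
        print(parts[0].decode(), img.shape)
```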
Hi, I am trying to use the data from the head pose tracker via the messages sent over the network API, but the data I receive makes no sense to me. Looking at the Head Pose Tracker visualizer the result looks great, but I can't manage to get the correct signs along the different axes with the network API. I am extracting the translation from the camera pose matrix with functions from ROS. Am I missing something? Edit: it seems I have at least managed to get a coherent result once, but with some lag.
Hi @user-91a92d. I'm not sure I fully understand your question. Can you elaborate a bit on what your output is? What doesn't look right etc.?
I was looking at the signs of the x, y, z values and they did not match what I was observing in the visualizer. Without changing anything it now seems to be alright, except for a delay of 0.5-1 s.
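In case it helps anyone later, here is a minimal sketch of decomposing a 4x4 camera pose matrix without ROS. The assumption is that you already receive the pose as a flat 16-element list over the network API; the reshape order should be checked against the actual message:

```python
import numpy as np
from scipy.spatial.transform import Rotation


def decompose_pose(pose_flat):
    """Split a flattened 4x4 pose matrix into translation and Euler angles."""
    pose = np.asarray(pose_flat, dtype=float).reshape(4, 4)
    translation = pose[:3, 3]
    euler_deg = Rotation.from_matrix(pose[:3, :3]).as_euler("xyz", degrees=True)
    return translation, euler_deg


# Example with an identity pose: no translation, no rotation
t, r = decompose_pose(np.eye(4).ravel())
print("translation:", t, "euler xyz (deg):", r)
```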
Hello everyone, could someone explain how to map gaze data onto dynamic AOIs?
Hi @user-00cc6a, if by dynamic AOIs you mean, for example, scrolling down a website on your laptop, you can do that using AprilTags and the Surface Tracker plugin - you'd just need to put the markers at the corners of your laptop screen.
@nmt ??