Hi everyone! I've collected some data, but I noticed that the calibration doesn't always yield good results for everyone. Even when the participant isn't wearing makeup or glasses, it sometimes doesn't work well. Is there a way to improve this? Could it be that the quality of the calibration depends on individual factors, such as the person's head size or other characteristics? Any insights would be greatly appreciated!
(I'm using Pupil Labs inside a Vive Cosmos Elite)
Hi @user-290cc0! I've moved your message to the appropriate channel. Could you share an example recording with [email removed] so that we can provide more concrete feedback?
Hi, I recently collected data using the core-xr on my HTC Vive and was exploring Pupil Player until a colleague recommended Pupil Cloud for analysis. I am seeking clarification and support on two things: 1. Can I use the screen recording data (which includes a bunch of .npy and other-format files) from Pupil Capture and analyze it on Pupil Cloud? 2. How do I upload these files to Pupil Cloud? I am unable to see an option in the web UI.
Hi @user-d5c9ea! Recordings made with the core xr equipment can't be uploaded to Pupil Cloud. Only Neon and Pupil Invisible recordings are compatible with Pupil Cloud. What analysis are you trying to perform in particular?