Razer is known for occasionally turning its concept "projects" into real products, and we can only hope that applies to Project Motoko.
Project Motoko is billed as "the future of wearable AI": a headset that works through a blend of AI and AR.
It comes with "dual FPV" cameras, meaning it captures the full first-person view and relays that data in real time, much as FPV drones do, to create a "being there" experience. The idea is to focus the AI platform on what the user can actually see and experience, ignoring all irrelevant data and input. The use of two cameras also allows for depth perception and precise recognition of positioning within the room and relative to the user. Razer claims the cameras offer an extended field of view as well.
Audio-wise, both close-range and long-range audio are taken into account, capturing not only your voice but also sound and speech from almost anything within the field of view of the FPV cameras.
The whole point is the interaction between the cameras, audio input, and the physical room and its objects, with Razer outright calling it a "personal full-time AI assistant". The product presentation integrates chat and livestreaming, giving the impression that it's aimed at content creators. It also demonstrates uses for social interaction and workouts, which we find slightly more staged, and which raise privacy concerns for those being filmed.
Project Motoko currently supports most widely used AI platforms, and even uses machine learning for, well, "providing robotics teams with high-value data to train humanoids for more natural perception, movement, and decision-making", which could indicate that Razer is gathering huge amounts of data for... something humanoid.