full research article
coming soon

rabbits use LAM to carry out human intentions in user interfaces.

LAM does not live in a vacuum. For rabbit OS to efficiently leverage the power of LAM to execute tasks on behalf of users on dedicated hardware, we designed new platforms to schedule and manage rabbits. Rabbits also interact with applications designed for humans, so knowing only how to achieve an objective is not enough: LAM and its peripheral software must carry it out in a humanlike, respectful way. Beyond our core model development, this breaks down into several key elements that support our products:

  • Cloud infrastructure that spins up environments in which the AI can behave like a human, because most software expects its user to be human, however each application defines that. We have built a dedicated cluster of virtualized environments that runs LAM against consumer applications, in both testing and production. It provides strong security and scalability, enabling us to rapidly prototype our foundation model research.
  • Hardware-software programming interfaces that deliver the multimedia experience of AI-human cooperation, because simply bundling existing protocols yields poorly optimized results. We have created our own optimizations for the protocols used in multimedia interactions between our users and our operating system, and for Virtual Network Computing (VNC) sessions that let users assist rabbits with sensitive operations such as authentication or payment, and let rabbits learn from user demonstrations.
  • A unified standard to test, observe, and iterate on the AI alongside the product, because LAM needs a "gym" in which to keep learning and adapting, drawing on observations from product usage and external assistance. Our formalization of web, desktop, and mobile application structures, and of the actions performed on them, lets us effectively use internet-scale scraped data and human feedback to train our models while requiring relatively little computational power.
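To make the first element concrete, here is a minimal sketch of how a cluster might allocate isolated sessions for LAM instances to run consumer applications in. All names (`SandboxPool`, `Session`) are hypothetical illustrations, not rabbit's actual infrastructure:

```python
# Illustrative sketch only: an in-memory scheduler standing in for a
# cluster of virtualized environments. Names are hypothetical.
from dataclasses import dataclass
import itertools

@dataclass
class Session:
    session_id: int
    app: str            # consumer application the LAM instance will drive
    state: str = "running"

class SandboxPool:
    """Allocates isolated sessions up to a fixed capacity."""
    def __init__(self, capacity: int):
        self.capacity = capacity
        self._ids = itertools.count(1)
        self.active: dict[int, Session] = {}

    def acquire(self, app: str) -> Session:
        # Refuse new work when the cluster is saturated; a real scheduler
        # would queue the task instead of raising.
        if len(self.active) >= self.capacity:
            raise RuntimeError("cluster at capacity")
        s = Session(next(self._ids), app)
        self.active[s.session_id] = s
        return s

    def release(self, session_id: int) -> None:
        self.active.pop(session_id, None)

pool = SandboxPool(capacity=2)
a = pool.acquire("music_app")
b = pool.acquire("food_app")
pool.release(a.session_id)   # freeing a slot lets another session start
c = pool.acquire("ride_app")
```

The point of the sketch is the lifecycle: sessions are isolated, capacity-bounded, and recycled, which is what makes it safe to run the same model in testing and production side by side.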
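The third element, a unified formalization of application structures and actions, can be sketched as a small schema: UI elements and the actions performed on them, serialized uniformly regardless of whether they came from a web, desktop, or mobile surface. The field names below are assumptions for illustration, not rabbit's actual format:

```python
# Hypothetical schema for recording UI interactions in a uniform,
# platform-agnostic way. Field names are illustrative assumptions.
from dataclasses import dataclass, asdict
import json

@dataclass
class Element:
    role: str       # e.g. "button", "textbox"
    label: str      # human-readable accessible name
    bounds: tuple   # (x, y, width, height) in screen pixels

@dataclass
class Action:
    kind: str       # "click", "type", "scroll", ...
    target: Element
    value: str = ""

# One demonstration step, serializable for training or replay
step = Action(kind="type",
              target=Element("textbox", "Search", (10, 20, 300, 40)),
              value="jazz playlists")
record = json.dumps(asdict(step))
```

Because scraped web data, desktop recordings, and human demonstrations can all be reduced to the same element-plus-action records, a single training pipeline can consume all of them, which is what keeps the computational requirements relatively low.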

Through this process, LAM operates within guardrails that keep its behavior safe, efficient, and indistinguishable from human behavior, making rabbits a comfortable choice to delegate user interactions to.