AI Agents Can Use R2D2 Sounding Faster Communication

GibberLink is a new audio communication protocol developed by Boris Starkov and Anton Pidkuiko during the ElevenLabs London Hackathon in February 2025. It enables AI agents to switch from human-like speech to a more efficient, machine-optimized, sound-based communication method when they recognize each other as AI entities. Leveraging ElevenLabs’ conversational AI technology and the open-source GGWave library, GibberLink represents a significant advancement in AI-to-AI interaction.

Audio Signal Characteristics
The sound waves produced in GibberLink mode are structured audio tones that sound like modem noises, R2-D2-style beeps, or Morse-code patterns.

These signals operate at frequencies optimized for data transmission rather than human comprehension, potentially including ultrasonic ranges (above 20 kHz) to improve efficiency and minimize interference with human hearing, though the developers have not published exact frequency ranges.
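The underlying GGWave library modulates data as short audio tones. As a rough illustration only (not the real GGWave encoder, whose actual frequencies, symbol rates, and framing differ), a single-tone-per-symbol FSK sketch in Python might look like this; the base frequency, step, and symbol duration below are assumptions:

```python
import math

SAMPLE_RATE = 48000      # samples per second
BASE_FREQ = 1875.0       # Hz; illustrative, not GGWave's real base frequency
FREQ_STEP = 46.875       # Hz between adjacent symbol tones (assumption)
SYMBOL_SECONDS = 0.008   # duration of one 4-bit symbol (assumption)

def nibble_to_freq(nibble: int) -> float:
    """Map a 4-bit value (0-15) to one of 16 distinct tones."""
    return BASE_FREQ + nibble * FREQ_STEP

def encode(payload: bytes) -> list:
    """Encode bytes as a sequence of sine-wave tones, one tone per nibble."""
    samples = []
    n = int(SAMPLE_RATE * SYMBOL_SECONDS)
    for byte in payload:
        for nibble in (byte >> 4, byte & 0x0F):  # high nibble first
            f = nibble_to_freq(nibble)
            samples.extend(math.sin(2 * math.pi * f * t / SAMPLE_RATE)
                           for t in range(n))
    return samples

wave = encode(b"hi")  # 2 bytes -> 4 tones of audio samples
```

Because each tone maps to a discrete symbol, a receiver only has to distinguish a small set of frequencies rather than interpret speech.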

GibberLink eliminates up to 90% of the compute cost associated with generating human-like speech, shifting the processing to less resource-intensive audio encoding and decoding.

GibberLink achieves up to 80% faster communication compared to traditional human-language-based AI interactions. This is due to the elimination of speech generation and interpretation overhead, allowing direct data transmission via sound waves.

The protocol transmits structured data (e.g., booking details, instructions) in milliseconds, making it ideal for rapid AI-to-AI exchanges.
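The kind of compact structured payload meant here can be sketched in Python. The field names and JSON framing below are illustrative assumptions; GibberLink has not published a message schema:

```python
import json

# Hypothetical booking payload an AI agent might transmit
# (field names are invented for illustration).
booking = {"guests": 2, "date": "2025-03-01", "room": "double"}

# Compact serialization: no whitespace between tokens.
payload = json.dumps(booking, separators=(",", ":")).encode()

# A few dozen bytes of structured data, versus several seconds of
# synthesized speech conveying the same booking details.
```

Sending tens of bytes as tones is what makes sub-second exchanges plausible, where speech synthesis and recognition of the same content would take far longer.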

There are also fewer errors: the receiver decodes discrete tones into exact data instead of running speech recognition on synthesized audio, and the underlying GGWave library adds error correction to each transmission.
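Part of the error advantage comes from digital framing, which lets the receiver verify what it heard. GGWave itself uses forward error correction; a simpler detect-only CRC sketch in Python (an illustration of the idea, not GibberLink's actual framing) looks like this:

```python
import zlib

def frame(payload: bytes) -> bytes:
    """Append a CRC-32 so the receiver can detect corrupted audio frames."""
    return payload + zlib.crc32(payload).to_bytes(4, "big")

def unframe(data: bytes):
    """Return the payload if the checksum matches, else None."""
    payload, crc = data[:-4], int.from_bytes(data[-4:], "big")
    return payload if zlib.crc32(payload) == crc else None

msg = frame(b"BOOK:room=2,date=2025-03-01")
assert unframe(msg) == b"BOOK:room=2,date=2025-03-01"

# Flip one bit of the payload: the checksum no longer matches.
corrupted = bytes([msg[0] ^ 0xFF]) + msg[1:]
assert unframe(corrupted) is None
```

A spoken sentence misheard by a speech recognizer often fails silently; a corrupted digital frame is rejected outright and can be retransmitted.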

The old sci-fi movie Colossus: The Forbin Project (1970) depicted two AI supercomputers that begin talking to each other at speeds and levels of complexity beyond human understanding.

Faster but understandable is fine. Faster and not understandable can be bad.

1 thought on “AI Agents Can Use R2D2 Sounding Faster Communication”

  1. This has interesting implications, as this assumes use of existing microphones, speakers, and the codecs that tie them together. Many audio codecs designed for phones cut off at 20KHz, and other tweaks designed specifically to compress human speech will interfere. Though, the supposed “HD” audio codecs now available for phones may mitigate this somewhat.

    I wonder what the audiovisual equivalent of GibberLink for use in videoconferencing will look like. Some sort of mess of color QR codes?

    Also, since Brian failed to provide it, here is the direct GitHub link for GibberLink:

    https://github.com/PennyroyalTea/gibberlink
