DroidSpeak: Enhancing Cross-LLM Communication

(arxiv.org)

15 points | by jonbaer 4 days ago

2 comments

  • ckrapu 3 days ago

    From an AI risk perspective, one of the most wonderful things about LLMs is that their chain of thought can be entirely read off their outputs by humans with no specific training.

    This is a risky step backwards, and for apparently little gain.

    • rotexo 2 days ago

      Can the E-cache and KV-cache be supplied to the model to produce the natural language output that would have been fed into the next model of the chain, if it were not for DroidSpeak? If so, it doesn't seem to materially change the explainability of the system.
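
      The question above turns on whether the KV cache carries the same information the model would otherwise recompute from the token sequence. A toy numpy sketch (a single attention head with made-up weights, not DroidSpeak's actual mechanism) shows why it does: decoding a new token against cached keys/values gives the same output as recomputing them from the full sequence.

      ```python
      import numpy as np

      rng = np.random.default_rng(0)
      D = 8  # head dimension
      Wq, Wk, Wv = (rng.standard_normal((D, D)) for _ in range(3))

      def attend(q, K, V):
          """Scaled dot-product attention for a single query vector."""
          scores = K @ q / np.sqrt(D)
          w = np.exp(scores - scores.max())
          w /= w.sum()
          return w @ V

      # A "prompt" of token embeddings, processed once to build the KV cache.
      prompt = rng.standard_normal((5, D))
      K_cache = prompt @ Wk
      V_cache = prompt @ Wv

      # A new token arrives. Decoding with the cache...
      x = rng.standard_normal(D)
      out_cached = attend(x @ Wq,
                          np.vstack([K_cache, (x @ Wk)[None]]),
                          np.vstack([V_cache, (x @ Wv)[None]]))

      # ...matches recomputing keys and values from scratch.
      full = np.vstack([prompt, x[None]])
      out_full = attend(x @ Wq, full @ Wk, full @ Wv)
      assert np.allclose(out_cached, out_full)
      ```

      So in principle the cached state determines the same next-token distribution, and one could decode it back to text; whether that decoding step is cheap enough to preserve auditability in practice is a separate question.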