
@chenghao-mou
Member

This allows us to trigger the STT tests with a /test-stt comment on PRs.

Before triggering a run, we need to verify that the PR contains no malicious code (e.g., code that exfiltrates API keys).
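
For context, a comment-triggered workflow of this kind usually gates on both the comment body and the commenter's permissions before checking out and running PR code, which is what the verification concern above is about. The following is a minimal sketch only; the file name, job layout, and install/test commands are assumptions, not the actual workflow added in this PR:

```yaml
# .github/workflows/stt-tests.yml (hypothetical name and contents)
name: STT tests (slash command)

on:
  issue_comment:
    types: [created]

jobs:
  stt-tests:
    # Run only for "/test-stt" comments on pull requests, and only when the
    # commenter has write-level access, so untrusted code cannot reach secrets.
    if: >-
      github.event.issue.pull_request &&
      startsWith(github.event.comment.body, '/test-stt') &&
      contains(fromJSON('["OWNER", "MEMBER", "COLLABORATOR"]'),
               github.event.comment.author_association)
    runs-on: ubuntu-latest
    steps:
      - name: Check out the PR head
        uses: actions/checkout@v4
        with:
          ref: refs/pull/${{ github.event.issue.number }}/head
      - name: Run STT tests
        run: |
          # Illustrative commands; the actual test invocation may differ.
          pip install -e ".[dev]"
          pytest tests/test_stt.py
```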

@chenghao-mou requested a review from a team on January 6, 2026 at 17:27
github-actions bot commented Jan 6, 2026

STT Test Results

Status: ✗ Some tests failed

| Metric | Count |
| --- | --- |
| ✓ Passed | 23 |
| ✗ Failed | 0 |
| × Errors | 1 |
| → Skipped | 15 |
| ▣ Total | 39 |
| ⏱ Duration | 218.4s |
Failed Tests
  • tests.test_stt::test_stream[livekit.plugins.aws]
    def finalizer() -> None:
            """Yield again, to finalize."""
      
            async def async_finalizer() -> None:
                try:
                    await gen_obj.__anext__()  # type: ignore[union-attr]
                except StopAsyncIteration:
                    pass
                else:
                    msg = "Async generator fixture didn't stop."
                    msg += "Yield only once."
                    raise ValueError(msg)
      
            task = _create_task_in_context(event_loop, async_finalizer(), context)
    >       event_loop.run_until_complete(task)
    
    .venv/lib/python3.12/site-packages/pytest_asyncio/plugin.py:347: 
    _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
    
    self = <_UnixSelectorEventLoop running=False closed=True debug=False>
    future = <Task finished name='Task-124' coro=<_wrap_asyncgen_fixture.<locals>._asyncgen_fixture_wrapper.<locals>.finalizer.<loc... File "/home/runner/work/agents/agents/.venv/lib/python3.12/site-packages/smithy_http/aio/crt.py", line 104, in chunks>
    
        def run_until_complete(self, future):
            """Run until the Future is done.
      
            If the argument is a coroutine, it is wrapped in a Task.
      
            WARNING: It would be disastrous to call run_until_complete()
            with the same coroutine twice -- it would wrap it in two
            different Tasks and that can't be good.
      
            Return the Future's result, or raise its exception.
            """
            self._check_closed()
            self._check_running()
      
            new_task = not futures.isfuture(future)
            future = tasks.ensure_future(future, loop=self)
            if new_task:
                # An exception is raised if the future didn't complete, so there
                # is no need to log the "destroy pending task" message
                future._log_destroy_pending = False
      
            future.add_done_callback(_run_until_complete_cb)
            try:
                self.run_forever()
            except:
                if new_task and future.done() and not future.canc
    
Skipped Tests
| Test | Reason |
| --- | --- |
| tests.test_stt::test_recognize[livekit.plugins.assemblyai] | universal-streaming-english@AssemblyAI does not support batch recognition |
| tests.test_stt::test_recognize[livekit.plugins.speechmatics] | unknown@Speechmatics does not support batch recognition |
| tests.test_stt::test_recognize[livekit.plugins.fireworksai] | unknown@FireworksAI does not support batch recognition |
| tests.test_stt::test_recognize[livekit.plugins.nvidia] | unknown@unknown does not support batch recognition |
| tests.test_stt::test_recognize[livekit.plugins.aws] | unknown@Amazon Transcribe does not support batch recognition |
| tests.test_stt::test_recognize[livekit.plugins.cartesia] | ink-whisper@Cartesia does not support batch recognition |
| tests.test_stt::test_recognize[livekit.plugins.soniox] | stt-rt-v3@Soniox does not support batch recognition |
| tests.test_stt::test_recognize[livekit.plugins.deepgram.STTv2] | flux-general-en@Deepgram does not support batch recognition |
| tests.test_stt::test_recognize[livekit.plugins.gradium.STT] | unknown@Gradium does not support batch recognition |
| tests.test_stt::test_recognize[livekit.agents.inference] | unknown@livekit does not support batch recognition |
| tests.test_stt::test_recognize[livekit.plugins.azure] | unknown@Azure STT does not support batch recognition |
| tests.test_stt::test_stream[livekit.plugins.elevenlabs] | Scribe@ElevenLabs does not support streaming |
| tests.test_stt::test_stream[livekit.plugins.mistralai] | voxtral-mini-latest@MistralAI does not support streaming |
| tests.test_stt::test_stream[livekit.plugins.openai] | gpt-4o-mini-transcribe@api.openai.com does not support streaming |
| tests.test_stt::test_stream[livekit.plugins.fal] | Wizper@Fal does not support streaming |

Triggered by workflow run #33
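
For reference, a summary comment like the one above is typically posted from the workflow itself once the test job finishes. The step below is only a sketch of one way to do that with actions/github-script; the step name and comment body are illustrative and not taken from the actual workflow:

```yaml
# Hypothetical reporting step; the real workflow's summary formatting differs.
- name: Post STT test summary to the PR
  if: always()  # report results even when the tests fail
  uses: actions/github-script@v7
  with:
    script: |
      await github.rest.issues.createComment({
        owner: context.repo.owner,
        repo: context.repo.repo,
        issue_number: context.issue.number,  // the PR number for issue_comment events
        body: "## STT Test Results\n\n(status, counts, and failed/skipped tables go here)",
      });
```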

@chenghao-mou merged commit da8941d into main on Jan 7, 2026
19 of 21 checks passed
@chenghao-mou deleted the fix/slash-trigger branch on January 7, 2026 at 19:52