🦜🔗 LangChain documentation
LangChain Python API Reference » langchain-community 0.3.23 » llms » acompletion_with_retry_streaming

acompletion_with_retry_streaming

async langchain_community.llms.fireworks.acompletion_with_retry_streaming(
    llm: Fireworks,
    use_retry: bool,
    *,
    run_manager: AsyncCallbackManagerForLLMRun | None = None,
    **kwargs: Any,
) → Any

Use tenacity to retry the completion call for streaming.

Parameters:
  • llm (Fireworks)
  • use_retry (bool)
  • run_manager (AsyncCallbackManagerForLLMRun | None)
  • kwargs (Any)

Return type:
  Any
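Under the hood, wrappers like this delegate to tenacity to re-invoke the streaming call when a transient API error occurs. A minimal standard-library sketch of that retry-with-exponential-backoff pattern is below; `TransientError`, `retry_async`, and `flaky_stream` are illustrative names, not part of langchain_community or the Fireworks SDK.

```python
import asyncio

class TransientError(Exception):
    """Stand-in for a retryable API error (rate limit, timeout, etc.)."""

async def retry_async(coro_factory, max_attempts=3, base_delay=0.01):
    """Call `coro_factory()` until it succeeds or attempts run out."""
    for attempt in range(1, max_attempts + 1):
        try:
            return await coro_factory()
        except TransientError:
            if attempt == max_attempts:
                raise
            # Exponential backoff: 1x, 2x, 4x ... the base delay.
            await asyncio.sleep(base_delay * 2 ** (attempt - 1))

calls = {"n": 0}

async def _tokens():
    # Simulated token stream, standing in for a streaming completion.
    for tok in ("Hello", ",", " world"):
        yield tok

async def flaky_stream():
    """Fails twice with a transient error, then yields the stream."""
    calls["n"] += 1
    if calls["n"] < 3:
        raise TransientError("temporary failure")
    return [tok async for tok in _tokens()]

result = asyncio.run(retry_async(flaky_stream))
print(result)      # ['Hello', ',', ' world']
print(calls["n"])  # 3 attempts: two failures, one success
```

In the real function, tenacity supplies the backoff policy and the `run_manager` callback is notified on each retry; passing `use_retry=False` skips the retry wrapping entirely.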


© Copyright 2025, LangChain Inc.
