A decorator-based Python framework. Vectorize function calls, store them in Weaviate, and get semantic caching, self-healing, and drift detection.
Trace and cache your functions with a single decorator. Both synchronous and asynchronous functions are supported.
```python
from vectorwave import vectorize, initialize_database

initialize_database()

@vectorize(
    semantic_cache=True,
    cache_threshold=0.95,
    capture_return_value=True,
    team="ml-team"
)
async def generate_response(query: str):
    return await llm.complete(query)  # `llm` is your LLM client, defined elsewhere

# Inside an async context (e.g. asyncio.run or an async framework):
# First call: executes normally, stores in Weaviate
result = await generate_response("explain transformers")

# Similar query: returns cached result in ~0.02s
result = await generate_response("what are transformers?")
```
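The `cache_threshold` value is presumably a vector-similarity cutoff: a new call reuses a stored result only when its query embedding is close enough to one captured earlier. The sketch below is a rough, library-agnostic illustration of that decision, not vectorwave's actual lookup; the in-memory `cache` list and the toy vectors stand in for the Weaviate-backed store, and 0.95 mirrors the threshold used above.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def lookup_semantic_cache(query_vec: np.ndarray, cache: list, threshold: float = 0.95):
    """Return the best-matching cached result if its similarity clears the threshold."""
    best_score, best_result = -1.0, None
    for stored_vec, stored_result in cache:
        score = cosine_similarity(query_vec, stored_vec)
        if score > best_score:
            best_score, best_result = score, stored_result
    return best_result if best_score >= threshold else None

# Toy vectors: near-identical queries embed close together, so the cached answer is reused.
cache = [(np.array([0.9, 0.1, 0.4]), "Transformers are attention-based models ...")]
print(lookup_semantic_cache(np.array([0.88, 0.12, 0.41]), cache))  # similarity ~0.999 -> cache hit
print(lookup_semantic_cache(np.array([0.1, 0.9, 0.0]), cache))     # similarity ~0.2 -> None, run the function
```

A higher threshold means only near-duplicate queries hit the cache; lowering it trades exactness for more cache hits.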
Everything you need for AI function observability.
Run `pip install vectorwave` and get started in minutes.
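The same decorator also covers plain synchronous functions. A minimal sketch, assuming the sync path accepts the same parameters as the async quickstart above (`classify_ticket` is a hypothetical example function):

```python
from vectorwave import vectorize, initialize_database

initialize_database()

@vectorize(semantic_cache=True, cache_threshold=0.95, capture_return_value=True, team="ml-team")
def classify_ticket(text: str) -> str:
    # an ordinary blocking body; in practice this might call a model synchronously
    return "billing" if "invoice" in text.lower() else "general"

# Called like any other function; tracing and caching happen inside the decorator.
label = classify_ticket("I was charged twice on my last invoice")
```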