Stravito helps large global enterprises unlock customer insights by simplifying how market research data is stored, discovered, and shared. Its knowledge management solution enables teams to make informed decisions faster by breaking down data silos and delivering seamless access to critical information that can be applied to drive innovation and business growth. Pinecone powers the semantic search behind the company’s conversational interface, Stravito Assistant, which is built using generative AI. With Assistant, Stravito customers can find and integrate actionable insights seamlessly across their datasets and workflows.
Use Cases: Semantic search, AI Assistant
Cloud: AWS
It's not always obvious what sets a great product or service apart from one that's simply good. The best ones just seem to fit—as if they naturally understand exactly what their users need. But that doesn't happen by accident. It takes deep insights into preferences, routines, and needs that users themselves might not even be able to articulate.
Organizing and accessing the research behind this intelligence can be complex and time-consuming without the right tools. For customers worldwide, Stravito's platform helps simplify the process for product creators and marketers who need to store, discover, and integrate consumer insights into their work.
Now, with Stravito Assistant, powered by Pinecone’s semantic search, the platform’s users can find answers faster, ask more targeted questions, and get immediate access to data that can inform business-critical decisions and drive innovation.
Pulling insight from a sprawl of unstructured data
Market and consumer insights are vast, varied, and often scattered across multiple formats and sources. Trend reports, survey results, and data from external service providers are frequently locked away in unstructured documents and incompatible formats. This makes it harder for marketing teams to discover, access, and integrate the insights they need.
Stravito has already solved that problem. By automatically categorizing data using machine learning and offering a natural language query interface, Stravito breaks down data silos. Teams can find, use, and collaborate on insights without needing to consider how or where they’re stored.
But conversational AI showed there was an opportunity to do more. A conversational interface could bring together insights regardless of where they lived, providing focused answers to specific questions along with prompts to dive deeper into topics of interest.
“We realized early on that we needed a semantic search solution capable of handling both the volume and variety of our customers' data. Pinecone’s infrastructure gave us the flexibility to meet that challenge without overburdening our engineering team.” Viktor Karlsson, Software Engineer, Stravito
RAG and limited context windows
Stravito's initial experiments began in 2022 with a RAG pipeline that enhanced LLM responses with customer data. However, the team quickly hit a roadblock.
The LLMs available at that time offered relatively limited context windows. That made it harder to feed in enough relevant data for the model to provide the insights Stravito’s customers needed. The team knew the answer was to enhance their RAG pipeline with dedicated semantic search capabilities, and so they evaluated their options.
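In practice, that retrieval step looks roughly like the sketch below: embed the user’s question, pull back only the most relevant document chunks from a semantic search index, and pack those chunks into the prompt so everything fits within the model’s limited context window. The helper functions, token budget, and prompt wording here are illustrative placeholders, not Stravito’s actual pipeline.

```python
# Illustrative RAG retrieval step. embed_text, search_index, and the token budget
# are hypothetical placeholders, not Stravito's implementation.
def build_prompt(question, embed_text, search_index, max_context_tokens=3000):
    query_vector = embed_text(question)            # embed the user's question
    chunks = search_index(query_vector, top_k=20)  # semantic search over document chunks

    context, used = [], 0
    for chunk in chunks:
        tokens = len(chunk.split())                # crude token estimate, for illustration only
        if used + tokens > max_context_tokens:     # stop before overflowing the context window
            break
        context.append(chunk)
        used += tokens

    return (
        "Answer using only the research excerpts below.\n\n"
        + "\n\n".join(context)
        + f"\n\nQuestion: {question}"
    )
```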
In looking for semantic search functionality, the Stravito engineering team first looked to their existing technology stack. While their Elasticsearch instance supported semantic search, optimizing it for their growing needs required significant effort. As a result, the team decided to explore vector databases designed specifically for these workloads.
Pinecone makes scaling search and adding features seamless
The Stravito team began their search with clear priorities. They needed a vector database that provided:
- Tenant isolation: The vector database would store data from multiple Stravito customers. Guaranteed separation between customers would be essential.
- Dynamic filtering: Providing the right insights is part of the Stravito promise. Metadata filtering would be important to focus vector search results. Similarly, the vector database would need to help enforce access controls, ensuring that users could see only the data they were authorized to access.
- An easy scaling model: With a growing customer base meaning ever-expanding datasets, Stravito needed a solution that could scale without increasing the workload of their DevOps team.
Many of the vector databases they evaluated checked one or two of these boxes. Only Pinecone checked all three, offering the effortless scalability they needed without pulling focus away from solving end-user problems.
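As an illustration of how the first two priorities map onto Pinecone primitives, the sketch below keeps each customer’s vectors in a dedicated namespace and uses a metadata filter to restrict a query to documents a user is allowed to see. The index name, namespace, metadata fields, and vector dimension are hypothetical, not Stravito’s schema.

```python
from pinecone import Pinecone

pc = Pinecone(api_key="YOUR_API_KEY")
index = pc.Index("research-chunks")  # hypothetical index name

chunk_embedding = [0.02] * 1024  # placeholder vector from an embedding model
query_embedding = [0.01] * 1024  # placeholder vector for the user's question

# Tenant isolation: each customer's vectors live in their own namespace.
index.upsert(
    vectors=[{
        "id": "trend-report-2024-chunk-07",
        "values": chunk_embedding,
        "metadata": {"source": "trend-report", "access_group": "marketing"},
    }],
    namespace="customer-acme",  # hypothetical per-tenant namespace
)

# Dynamic filtering: scope the query to one tenant and to documents this user may access.
results = index.query(
    vector=query_embedding,
    top_k=10,
    namespace="customer-acme",
    filter={"access_group": {"$in": ["marketing", "all-employees"]}},
    include_metadata=True,
)
```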
Better, faster insights
Since launching Stravito Assistant, the company has empowered global customers to access insights faster and integrate them seamlessly into their workflows. Pinecone’s vector database has been instrumental in enabling Stravito to handle the scale, performance, and complexity required to deliver these capabilities.
Here’s how Pinecone has supported the Stravito team in delivering Stravito Assistant:
- Effortless scalability: Pinecone’s serverless offering scales automatically to handle Stravito’s growing workloads, maintaining performance without additional effort from the DevOps team.
- Operational efficiency: By reducing the need to manage self-hosted infrastructure, Pinecone allows Stravito’s engineers to focus on solving end-user problems, helping them bring value to market faster and at a lower cost.
- Faster insights: Backed by Pinecone’s vector search, Stravito Assistant delivers quick, accurate data retrieval and synthesis, helping users navigate complex datasets and make decisions more effectively.
- Improved performance: Using Pinecone’s hosted embedding and reranking models has both improved recall and made it simpler for Stravito to connect other parts of their stack to the search services.
“We evaluated our semantic search system’s recall performance both with and without the Pinecone reranker and saw substantial improvements. Using a reranker has from that point onwards been a no-brainer. By centralizing our vector database interactions to one system, it will be easier for other teams to adopt this technology too.” Viktor Karlsson, Software Engineer, Stravito
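For a sense of what the hosted models look like in use, the sketch below embeds a query and reranks candidate chunks entirely through Pinecone’s inference API. The query text, candidate chunks, and choice of models are illustrative examples rather than a description of Stravito’s production setup.

```python
from pinecone import Pinecone

pc = Pinecone(api_key="YOUR_API_KEY")

question = "What do consumers value most in sustainable packaging?"  # hypothetical query

# Hosted embedding: produce a query vector without running a model server yourself.
# In the full pipeline this vector would be passed to index.query(...) (omitted here).
query_embedding = pc.inference.embed(
    model="multilingual-e5-large",
    inputs=[question],
    parameters={"input_type": "query"},
)[0].values

# Candidate chunks would normally come from the vector search (see the earlier sketch);
# hard-coded here to keep the example self-contained.
candidates = [
    {"id": "chunk-1", "text": "Respondents ranked recyclability above price in three of five markets."},
    {"id": "chunk-2", "text": "Brand loyalty correlated weakly with packaging material choices."},
]

# Hosted reranking: reorder candidates by relevance to the query before building the prompt.
reranked = pc.inference.rerank(
    model="bge-reranker-v2-m3",
    query=question,
    documents=candidates,
    top_n=2,
)
for row in reranked.data:
    print(f"{row.score:.3f}", candidates[row.index]["id"])
```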
In the near future, Viktor and his colleagues plan to further streamline their tech stack by migrating embedding and reranking workloads from self-hosted AWS instances to Pinecone’s inference endpoints. This move will reduce overhead, cut costs, and simplify vector database workflows by removing the dependency on self-hosted infrastructure. With Pinecone, the Stravito team has delivered a scalable solution that helps their customers extract more value from their data, all while minimizing the impact on their engineering resources.