Frequently Asked Questions
Use this knowledge base to configure, tune, and operate the Solverox TS AI Agent without waiting on a support ticket.
If you need tailored help, email [email protected] and include your tenant ID.
How can I start using the AI Agent fast?
This quick-start checklist gets your tenant live without delays:
- Access the admin UI.
  - Atlassian Marketplace: From the avatar menu select Settings → Apps → Marketplace applications. Only Jira Admins can see and use the Solverox TS AI Agent UI.
  - Paddle: Your URL, username, and password were emailed when you subscribed. Sign in with those credentials.
- Choose the correct region. The Main page prompts you to pick a region (some tenants are fixed to EU). Confirm the region aligns with your data residency requirements.
- Complete the Settings tabs except the JQL field. Go tab by tab and provide every required value, but leave "Please provide the JQL to select which Work Items to respond to" blank for now. Click Save on each tab before navigating away.
- Switch the application on. Return to the Main page and toggle the application state to ON.
- Ingest your documentation first. Open Settings → KB Details and start your first data ingestion.
  - Once ingestion finishes, the AI Agent can answer web users with the URLs you supplied.
  - Jira ingestion runs nightly around 00:00 GMT. Large Jira projects may take additional time on the first run.
  - After the Jira run completes, the agent combines both data sources in its responses and explanations.
- Add the JQL filter after Jira ingestion completes. Usually the next day, visit Settings → Jira Items to Work On and fill in "Please provide the JQL to select which Work Items to respond to" (see the example after this list). Waiting until Jira data is indexed ensures that responses within Jira work items meet quality standards.
- Embed the chat widget. Publish the widget on the relevant domains (see chat widget instructions) so end users can interact with the AI Agent immediately after ingestion.
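For the JQL field you fill in after the first Jira ingestion, a minimal starting point might look like the sketch below, assuming a project key of SUPPORT and a standard service-desk status name (both are placeholders; substitute your own values):

```jql
project = SUPPORT AND status = "Waiting for support"
```

Start narrow, verify answer quality on a few work items, then widen the filter. More patterns are listed under "How do I limit which Jira issues the AI Agent actively works on?" below.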
What are the most effective ways to build and maintain the knowledge base?
AI performance mirrors the quality of the source material. Keep the corpus accurate, structured, and easy to classify:
- Remove repetition and outdated docs. Feeding stale or duplicated pages inflates chunk counts, risks hitting limits, and degrades the ranking of the actual source of truth.
- Structure web documentation properly. Clean `<head>` metadata, descriptive `<title>` tags, and semantic headings help the crawler associate pages with the right topics.
- Use searchable PDFs. Text-based, well-bookmarked PDFs make it easier for the ingestion pipeline to segment content into relevant snippets.
- Publish precise API specs. Provide OpenAPI v3 (YAML/JSON) definitions so the agent can surface detailed request/response information.
- Curate Jira work items. Maintain clear questions, answers, and resolution notes so the AI avoids noisy comment chains or ambiguous statuses.
If any of these inputs are sub-optimal, the Solverox prioritization algorithms can still resolve complex questions, but thoughtful content hygiene always unlocks the best possible answers.
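One way to spot noisy Jira records before they are ingested is a quick JQL audit for finished items that never received a resolution. A sketch, reusing the hypothetical SUPPORT project key from the examples elsewhere in this FAQ:

```jql
project = SUPPORT AND statusCategory = Done AND resolution IS EMPTY
```

Clean up or exclude whatever this search returns before the nightly Jira ingestion picks it up.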
How should I configure Jira ingestion filters (Work Items to Include in KB)?
Tailor the Jira ingestion scope with JQL so the agent learns only from relevant, up-to-date records:
- All closed work items in a project:
project = SUPPORT AND status = "closed" - Closed work items newer than a year:
status = "closed" AND createdDate >= -365d— a practical way to avoid loading outdated content. - Closed work tied to specific products:
project = SUPPORT AND status = "closed" AND (components = "Product 1" OR components = "Product 2")
Adjust each filter to reflect your product life cycle and compliance requirements before launching the nightly ingestion.
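These clauses combine freely. A sketch that merges status, recency, and product scoping into a single ingestion filter (the project key and component names are placeholders from the examples above):

```jql
project = SUPPORT
  AND status = "closed"
  AND createdDate >= -365d
  AND (components = "Product 1" OR components = "Product 2")
```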
How do I limit which Jira issues the AI Agent actively works on?
Use JQL filters to focus the automation on the highest-impact work items; a combined example follows the list:
- All tickets waiting on support: `status = "Waiting for support"`
- Specific project and status: `project = SUPPORT AND status = "Waiting for support"`
- Named customers or organizations: `project = SUPPORT AND status = "Waiting for L1 Support" AND (Organizations = "4E INVITE" OR Organizations = "Abu Dhabi Media")`
- Email-only intake: `project = SUPPORT AND "Request Type" = "Emailed request (SP01)"`
- Targeted geographies: `project = SUPPORT AND status = "Waiting for support" AND "customfield_geography[Dropdown]" = "MEA"`
- Assigned owners: `project = SUPPORT AND status = "Waiting for support" AND assignee = "Your Agent Name"`
- Premium support tiers: `project = SUPPORT AND status = "Waiting for support" AND "Support Type[Short text]" ~ "Elite SLA"`
- Exclude specific products: `project = SUPPORT AND status = "Waiting for support" AND component NOT IN ("Product 1", "Product 2")`
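These clauses also compose. A sketch that narrows the agent to emailed Elite-SLA requests while excluding two products (every project, request-type, field, and component name is illustrative, taken from the examples above):

```jql
project = SUPPORT
  AND status = "Waiting for support"
  AND "Request Type" = "Emailed request (SP01)"
  AND "Support Type[Short text]" ~ "Elite SLA"
  AND component NOT IN ("Product 1", "Product 2")
```

Test any combined filter in Jira's issue search first so you can see exactly which work items the agent will pick up.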
How can I refresh only specific URLs in my knowledge base?
Submit the refresh job with the narrowest possible scope so only the targeted content is re-crawled. For example:
- URL: `https://docs.solverox.com/TSAIAgent/|TSAIAgent|3.1.2`
- Depth: 15
- Max Pages: 500
- Same Path / Same Domain: Enabled
As long as the depth and max pages thresholds are not exceeded, every page under that path is refreshed and automatically tagged with the correct product and version metadata.
How can I delete specific URLs from my knowledge base?
Use the same scoping strategy as refresh jobs, but select the delete action:
- URL: `https://docs.solverox.com/TSAIAgent/|TSAIAgent|3.1.2`
- Depth: 15
- Max Pages: 500
- Same Path / Same Domain: Enabled
Everything under that URL path is removed from the knowledge base when the job finishes.
Are there specific timelines for data ingestions or deletions?
Document URLs can be ingested or deleted at any time, and you can monitor the progress in the admin UI. Jira work items follow a schedule: ingestion starts nightly around midnight GMT, with the exact start time depending on Jira triggers and TSAI container workload. The first run can take significantly longer because it processes the full backlog; subsequent runs only process deltas.
How long does ingestion take?
Actual times vary with page structure, OCR needs, and container utilization, but typical reference points are:
- 1000 web pages (no OCR) ingest in about 25 minutes, roughly 40 pages per minute.
- 1000 pages produce roughly 1800 usable chunks, about 1.8 chunks per page.
Use these values for planning purposes and monitor your own ingestion dashboard for real-time throughput.
What should I consider when providing URLs for the knowledge base?
Most customers version their docs via product/version paths. Configure each entry with the target URL, product label, and version, for example:
- `https://docs.solverox.com/TSAIAgent/v3.2.7/|TSAIAgent|3.2.7`
- `https://docs.solverox.com/EasyReports/v1.0.9/|EasyReports|1.0.9`
Enable both "Follow same domain" and "Follow same URL paths" so the crawler stays within scope.
Important reminders:
- Add trailing slashes to keep URL patterns consistent across documentation generators, unless you have a deliberate reason not to.
- Only ingest material you own or are licensed to redistribute, even if it is publicly reachable.
- Skip pages that do not help answer user questions; irrelevant content dilutes precision.
What else should I know about ingestion rate limits?
During a crawl, the platform intentionally limits outbound requests to roughly four per second per documentation site. This protects most documentation servers. If your site enforces stricter limits, contact Solverox to temporarily reduce the crawl speed or relax the limit on your side until ingestion completes. You can always watch the live status in the admin panel and re-enable your stricter throttling afterward.
How should I interpret the URL list tables in the Admin UI?
The tables expose the full lifecycle of every document source:
- Knowledge Base Sources: The authoritative list of all sources currently included.
- Added URLs: URLs that were ingested during the latest run.
- Removed URLs: URLs deleted or refreshed during the latest run. Seeing a URL in both Added and Removed indicates the contents were refreshed.
- Error URLs: URLs that could not be ingested or deleted (temporary network/DNS outages, broken links, or unsupported formats). Use the list to retry critical sources after addressing the issue.
What is my tenant ID?
If you subscribed through Atlassian Marketplace, your tenant ID is the subdomain in `https://<tenant_id>.atlassian.net`. If you purchased outside Atlassian (for example via Paddle), the tenant ID was included in the welcome email sent to the primary contact.
Which warning messages might appear, and how do I resolve them?
| Message | Possible cause | Resolution |
|---|---|---|
| Admin UI: Unable to load current status / Backend status unavailable / Unable to reach the backend | Transient network hiccup or Forge outage. | Retry after a few minutes. |
| Admin UI: Tenant not found / Install the app or contact support | Appears immediately after installation while provisioning completes. | Wait for installation and payment confirmation. |
| Admin UI: Unable to save settings: <details> | Validation failed (non-HTTPS base URL, crawl limits out of range, missing encryption parameters, etc.). | Review every field, correct the invalid entry, then save again. |
| Admin UI: Unable to request metrics: <details> | An invalid filter or payload blocked the request. | Confirm the metric inputs and resubmit. |
| Admin UI: Monthly crawl limit has been reached | Your package quota was exhausted. | Check pricing tiers or email [email protected] to discuss an upgrade. |
| Chatbot Widget: Application is not active | The application is toggled OFF while users are in session. | Switch the application back ON to resume chat assistance. |
If the warnings persist, contact [email protected] with your tenant ID so we can investigate.
I need help bringing the TS AI Agent up. Can you help?
Every customer receives one free day of onboarding consultancy during the first month. In many cases we can extend that assistance if you need additional guidance. We will configure the agent together, review your Jira setup and workflows, and recommend improvements where needed. Email [email protected] to schedule time.