Atlassian AI Featured Plugins – A Survey

March 29, 2025

With so many firms adding AI capabilities to their software, it can be difficult to understand how these offerings differ and what value they can add to your development process. To help, our research team at XBOSoft investigated and performed a high-level evaluation of the functionality of 53 AI-enabled plugins for JIRA, Confluence, and Bitbucket. Our research focused on the functionality offered by these plugins, particularly AI capabilities such as text generation, summarization, translation, and terminology explanation, along with other features that rely on AI-generated context. We provide general information on what these existing plugins do without delving too deeply into their technical implementations.

1. AI Plugin Distribution

  • JIRA: 30 plugins
  • Confluence: 22 plugins
  • Bitbucket: 1 plugin

2. Functionality

The AI-related functionality in these plugins can be categorized into the following key areas:

2.1 Confluence + AI Plugin

Conf-1 Area: Text Generation from Text

Description

  • Content Generation
  • Summarization
  • Translation
  • Terminology/specific jargon explanation
  • Ideas
  • Title
  • Comments according to the context
  • Code
  • Checklist & action items
  • Forms
  • Tables
  • Polls
  • Tabs
  • Tags for categorization
  • Tooltips
  • Diagrams

Conf-2 Area: Text Generation from Video

Description: Generate summaries & key points from your videos

Plug-in Name: AI Assistant


Conf-3 Area: Question Creation

Description

Simply provide a topic title and AI generates relevant study materials and multiple-choice questions

Plug-in Name: Illumina


Conf-4 Area: Image Creation

Description

  • Create images
  • Create badges
  • Create buttons

Conf-5 Area: Categorization

Description

  • Label Suggestion
  • Analyzing whether customer feedback or interview reports are positive, negative, or neutral

Conf-6 Area: Chat Bot

Description

  • Select the Confluence spaces and pages to train the chat bot.
  • Then it can instantly answer your questions based on the Confluence wiki content provided.


2.2 JIRA + AI Plugin

Jira-1 Area: Content Generation/Refinement

Description

  • Epics
  • Breakdown epic into user stories
  • User Stories
  • Acceptance Criteria
  • Test Cases
  • Automation Scripts (snippets, incomplete code)
  • Release Note / Road Map
  • Sprint reports
  • FAQs

Jira-2 Area: Report Generation

Description

  • Stress Report
  • Custom daily/weekly/monthly/yearly reports (summary, list of issues, etc.)

Jira-3 Area: Insights

Description

  • AI integrates with data sources in JIRA to analyze information and generate reports that can include initial insights or suggestions.
  • Receive stress management tips and mental health support
  • Summarize Jira activity into actionable insights
  • Next steps: suggest next actions based on the text
  • Find ambiguities: identify ambiguous points in the text
  • Sentiment analysis

Jira-4 Area: Image Analysis

Description: Extract text and information from screenshots attached to JIRA tickets

Plug-in Name: AI Screenshot Insights


Jira-5 Area: Chat Bot

2.3 Bitbucket + AI Plugin

Bitbucket-1 Area: Pull Request

Description

  • Supports you during pull request reviews with the help of AI and by integrating static code analysis results
  • Better review with Static Analysis & AI insight
  • Get AI help during PR Creation and Review (opt-in)
  • Static Code Analysis & Software Component Analysis


3. Concerns and Limitations

Data Privacy
Some customers are hesitant to share internal company data with external AI plugins, which can be a significant barrier to adoption. Companies may be wary of how AI tools handle proprietary or sensitive information; therefore, many want to adopt on-premises solutions to ensure that data is handled securely. Because users of the AI plugins on the Atlassian platform (JIRA, Confluence, Bitbucket) are asked to connect their ChatGPT or other AI model account to access a plugin's features, there are real concerns regarding data privacy and confidentiality.

Quality of Generated Content
Current test case generation tools built on large language models (LLMs), such as ChatGPT and Gemini, automate the creation of test cases by generating titles, steps, and expected results based on user requirements or existing documentation. However, these tools typically do not provide the necessary test data automatically. Without adequate and accurate input data, the old adage "garbage in, garbage out" applies to AI as well, leading to more manual work to examine the output. Therefore, to optimize the use of AI, users must supply training data, which involves defining the application's structure, environment data, and seed example data to guide the generation process, as sketched below.

[Figure: Gemini AI asked to create a test case for a Jira login]
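As an illustration of what supplying that context might look like, here is a minimal sketch that bundles a requirement with application structure, environment details, and seed example data into a single prompt. The helper name, field names, and values are all invented for illustration and are not taken from any particular plugin.

```python
# Hypothetical sketch: bundling application context and seed data into a
# test case generation prompt. All names and values are illustrative only.

APP_STRUCTURE = """
Login page fields: username (email format), password (8-64 chars),
"Remember me" checkbox, "Log in" button. On success, redirect to /dashboard.
"""

ENVIRONMENT = "Staging: https://staging.example.com, test tenant 'acme-qa'"

SEED_EXAMPLES = [
    {"username": "qa.user@example.com", "password": "Passw0rd!23", "expected": "redirect to /dashboard"},
    {"username": "qa.user@example.com", "password": "wrong-pass", "expected": "error 'Invalid credentials'"},
]

def generate_test_case_prompt(requirement: str) -> str:
    """Build an LLM prompt that grounds test case generation in real context."""
    examples = "\n".join(str(example) for example in SEED_EXAMPLES)
    return (
        f"Requirement:\n{requirement}\n\n"
        f"Application structure:\n{APP_STRUCTURE}\n"
        f"Environment:\n{ENVIRONMENT}\n\n"
        f"Seed examples (realistic data to reuse):\n{examples}\n\n"
        "Generate test cases with a title, steps, expected results, and concrete test data."
    )

print(generate_test_case_prompt("Users must be able to log in to Jira with valid credentials."))
```

Grounding the prompt this way is what allows a model to produce concrete, environment-appropriate test data instead of generic placeholders.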

Need For Training Data
Without training data, AI flies blind. It is like asking an engineer to build a bridge without providing material specs or load requirements: they might assemble something, but it won't align with the actual environment or constraints.

The test data generated by the AI models we tried (ChatGPT 4.0, Gemini, DeepSeek R1) cannot be applied directly to testing. Training AI models requires real test data, which again raises privacy concerns. For test data to be meaningful in training an AI model, it should (see the illustrative example after this list):

  1. Mimic real-world usage, dates, SKU, etc.
  2. Fit complex business rules or logics or a particular combination of fields or validations.
  3. Conform to precise patterns or values in a specific feature.
  4. Be valid across multiple systems (e.g., integrating with an external payment gateway, shipping service, or database). AI cannot simulate these interactions unless it’s been given a direct understanding of how those services work or is explicitly trained to handle these cases.
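To make these criteria concrete, here is a small, entirely hypothetical seed record for an e-commerce order flow. The field names, SKU format, discount rule, and service identifiers are invented for illustration; in practice they would come from real (and appropriately anonymized) enterprise data.

```python
# Hypothetical seed record illustrating the four criteria above.
# All identifiers and values are invented for illustration.
from datetime import date

seed_order = {
    # 1. Mimics real-world usage: realistic dates and SKU formats
    "order_date": date(2025, 3, 14).isoformat(),
    "sku": "SKU-4821-BLK-M",
    # 2. Fits business rules: discount code only valid for orders over $100
    "subtotal": 129.99,
    "discount_code": "SPRING10",           # rule: requires subtotal >= 100
    # 3. Conforms to precise patterns for a specific feature
    "postal_code": "94105-1234",           # ZIP+4 pattern required by checkout
    # 4. Valid across integrated systems (payment gateway, shipping service)
    "payment_token": "tok_test_visa_001",  # must exist in the payment sandbox
    "shipping_service_id": "UPS-GND",      # must map to a carrier in the shipping API
}

def violates_discount_rule(order: dict) -> bool:
    """Simple check for the kind of business rule the AI would need to respect."""
    return order["discount_code"] is not None and order["subtotal"] < 100

assert not violates_discount_rule(seed_order)
```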

Plugin AI Model Deployment and Local Implementation
Given that many organizations have unique workflows and data formats, AI plugins can usually be customized for specific industries or workflows. Users who wish to get optimal results from AI while alleviating privacy concerns would need to deploy AI models locally and feed them their enterprise data. However, this requires investment in computing resources and expertise.
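As a brief sketch of what local deployment can look like, one common pattern is to point an OpenAI-compatible client at a locally hosted model server (for example, Ollama or vLLM) so enterprise data never leaves the network. The endpoint URL and model name below are assumptions that depend on the local setup.

```python
# Minimal sketch: calling a locally hosted model through an OpenAI-compatible
# endpoint so enterprise data stays in-house. The URL and model name are
# assumptions that depend on your local deployment (e.g., Ollama, vLLM).
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # local server, e.g. Ollama's default port
    api_key="not-needed-locally",          # local servers typically ignore the key
)

response = client.chat.completions.create(
    model="llama3.1",  # whichever model is pulled and served locally
    messages=[
        {"role": "system", "content": "You generate software test cases."},
        {"role": "user", "content": "Generate test cases for the Jira login page."},
    ],
)
print(response.choices[0].message.content)
```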

4. Exploring AI Testing Tools Outside of Atlassian

The Atlassian plugins generally offer incremental improvements over existing functionality within the product line rather than transformative changes. None of these plugins currently provides test automation capabilities, although there are many standalone SaaS applications and AI-powered systems that enhance software testing.

Claude's AI agent can convert natural language into computer operations, enabling automation (see Claude | Computer use for automating operations). This could make automating software tests significantly faster, moving directly from user stories and requirements to automation. We found several companies and tools dedicated to developing solutions for automatically generating functional GUI test automation scripts. These solutions often leverage advances in artificial intelligence, machine learning, and natural language processing to simplify and expedite the testing process. Some notable companies and tools in this space include:

These tools, and many others, claim to accelerate software testing, but require much deeper research to understand the necessary inputs and quality of the output.

5. Summary and Next Steps

In this study, we first examined the plugins in the Atlassian marketplace to see which ones claimed to use AI to enhance their functionality. The steady stream of new and updated plugins shows a clear focus on bringing the latest technology to the Atlassian community. Based on our own research using these plugins, our experience developing our own plugin, and our skilled AI research team, we found several issues around privacy, generated content quality, training data requirements, and local implementation.

Using the Atlassian infrastructure is just the first step in integrating AI into your test process. As such, we listed a few popular test automation tools that state their usage of AI in generating test automation. This list was not exhaustive and certainly deserves an entire study to understand where these tools use AI, the quality of their output, the test data they require, and their ease of implementation, to mention just a few criteria. If you'd like to be considered further as we continue our research, please inquire at [email protected].

6. Discussion and Additional Info

We are considering enabling AI in TestVia so users can configure the connection to the LLM API of their choice, further simplifying test case generation. Once an AI feature is implemented, such as "AI-generated test cases based on simple requirement input," it will prompt users to connect their AI model account. The general process works as follows (a minimal sketch appears after the list):

  1. The plugin sets up a prompt template tailored to the specific task (e.g., generating test cases or refining the bug summary according to requirements).
  2. The plugin calls the AI API to process the request (any associated token charges apply to the connected account).
  3. The software processes the AI model’s output (e.g., test cases) and integrates it into the plugin’s output.
  4. The plugin formats the output for the user’s needs.
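A minimal sketch of steps 1 through 4, assuming a generic OpenAI-compatible API. The function names, prompt template, and Jira-style table formatting are hypothetical and only illustrate the flow; they are not TestVia's actual implementation.

```python
# Hypothetical sketch of the four-step flow above. Function names, the prompt
# template, and the output format are illustrative, not TestVia's actual code.
import json
from openai import OpenAI

client = OpenAI()  # uses the user's connected account / API key

PROMPT_TEMPLATE = (
    "Generate test cases for the requirement below. "
    "Return a JSON array of objects with keys \"title\", \"steps\", and \"expected\".\n\n"
    "Requirement:\n{requirement}"
)

def generate_test_cases(requirement: str) -> list[dict]:
    # Step 1: fill the task-specific prompt template
    prompt = PROMPT_TEMPLATE.format(requirement=requirement)
    # Step 2: call the AI API; token charges accrue to the connected account
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    # Step 3: process the model's output into structured test cases
    return json.loads(response.choices[0].message.content)

def format_for_plugin(test_cases: list[dict]) -> str:
    # Step 4: format the output for the user's needs (here, a Jira wiki table)
    rows = ["|| Title || Steps || Expected ||"]
    for tc in test_cases:
        rows.append(f"| {tc['title']} | {'; '.join(tc['steps'])} | {tc['expected']} |")
    return "\n".join(rows)
```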

Besides generating test cases from requirements, what else can be done?

In addition to simplifying test case generation, there are several other AI-powered features that can be implemented to enhance the functionality of TestVia:

  1. Generate Test Cases and Steps from User Inputs (e.g., User Stories, Task Descriptions, or Custom Prompts): The AI can analyze user-provided inputs, such as user stories, task summaries, or custom prompts, to automatically create relevant test cases and detailed test steps. This ensures comprehensive coverage of requirements while allowing flexibility for user-specific contexts.
  2. Generate Test Steps from Test Case Summaries/Descriptions: For existing test cases, the AI can analyze their summaries and descriptions to automatically generate detailed test steps, saving time and ensuring consistency.
  3. Intelligent Summarization of Test Runs or Milestones and Report Generation: The AI can analyze test results and intelligently summarize the outcomes of test runs or milestones. It can then generate detailed reports, highlighting key findings, trends, and areas for improvement.
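As one illustration of item 3, here is a sketch of how test run results might be aggregated and then summarized by a model. The result structure, helper name, and model choice are assumptions rather than a committed design.

```python
# Hypothetical sketch for item 3: summarize a test run into a short report.
# The result structure, helper name, and model choice are illustrative assumptions.
from collections import Counter
from openai import OpenAI

client = OpenAI()

def summarize_test_run(results: list[dict]) -> str:
    """Aggregate pass/fail counts, then ask the model for a narrative summary."""
    counts = Counter(result["status"] for result in results)
    failures = [result for result in results if result["status"] == "failed"]
    facts = (
        f"Totals: {dict(counts)}\n"
        + "Failed cases:\n"
        + "\n".join(f"- {f['title']}: {f.get('error', 'no error message')}" for f in failures)
    )
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": "Summarize this test run, highlighting key findings and trends:\n" + facts,
        }],
    )
    return response.choices[0].message.content

# Example usage with made-up results
report = summarize_test_run([
    {"title": "Login with valid credentials", "status": "passed"},
    {"title": "Login with expired password", "status": "failed", "error": "No redirect to reset page"},
])
```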

Thanks to Will for sharing his findings about AI-related plugins in Atlassian, and to Neil for contributing information on AI automation capabilities.