Coverity 2020.12 Help Center


Preface
1. The Static Analysis Solution
1.1. Checkers
1.2. Deployment Architecture
1.3. Combining Results from Coverity and Other Analysis Tools
2. Working with Static Analysis
2.1. Roles and Responsibilities
2.2. Basic Workflow
3. Customizing and Extending Static Analysis
3.1. Global Choices
3.2. Checker, Language, and Context-based Choices
3.3. New Custom Checkers
4. Use Cases
4.1. Securing Web Applications
4.2. Addressing Coding Standards Violations
4.3. Using Coverity to Differentiate Your Product in the Marketplace
5. Documentation for Installation
6. Documentation Set
7. Getting Started
7.1. Getting Started with Coverity Platform
7.2. Getting Started with Coverity Analysis
7.3. Getting Started with Coverity Desktop
8. Online Resources

Preface

This book introduces Coverity, a Static Application Security Testing (SAST) and issue management tool.

  • It describes Coverity's advantages as a testing solution, explains how it works, describes the roles and workflows involved, and summarizes the ways in which you can customize or extend your analysis.

  • It lists and describes all the books in the Coverity documentation set and provides information about where to go for additional help.

You should read this book if you work with static analysis as a developer, administrator, or manager.

This book focuses on the use of Coverity as a standalone product. You can also use Coverity as part of the Polaris Software Integrity Platform™; this book does not describe that use case. For information about using Coverity on Polaris, see the Polaris User's Guide.

Chapter 1. The Static Analysis Solution

Software testing is a critical step in the development process. Coverity is a static analysis solution that makes it possible to address software issues early in the development life cycle by analyzing source code to identify the following kinds of problems:

  • software quality and security issues

  • violations of common coding standards

The static analysis solution includes analysis tools as well as management tools. Analysis tools scan your code and flag issues. Management tools allow you to store results, to fine tune the testing configuration, to monitor trends, and to produce reports. You can also use Coverity tools to manage issues found by third-party tools.

As a testing method, static analysis offers the following advantages:

  • You can test code as soon as there is one function that can be parsed. You don't need to have a buildable or working system to do analysis.

    Static analysis allows you to correct problems before they become embedded in your code and require costly fixes or workarounds.

  • You test every possible path through your code.

    As applications grow, achieving test coverage using dynamic testing methods becomes costly and computationally prohibitive. Coverity can test all paths through the code, even ones that are extremely difficult to test manually such as error conditions that would only be triggered in the case of hardware failure.

  • It is deterministic: analysis of the same code base yields the same results.

  • It is able to analyze large code bases very quickly. Coverity uses algorithms that are designed to scale for large applications.

To find issues, Coverity first scans your code and then calculates a call graph. Based on the dependencies defined in the graph, it derives all possible paths through your code. Finally, it traverses every path looking for events that result in security or quality issues, and it displays those issues as they occur in your source, with information about each issue's cause and remediation.

Here's an example of the sort of information displayed for an issue:

Figure 1.1. Example: Information Displayed for an Issue


Note that in addition to flagging the Main Event (issue), the analysis engine can also identify contributing events and control structures related to the offending issue. That is, Coverity doesn’t just analyze code within the context of a specific function, but analyzes execution flows. Hence a defect might start in one function and terminate in another function or class. In each case, Coverity explains how it determines that an issue exists.

Analysis can be carried out using build-based (for compiled languages) and buildless capture methods. Which method you choose depends on the source language and on the amount of work you are willing to invest in configuring the analysis.

Analysis testing and results can be integrated with your IDE, continuous integration (CI) system, source control system, and bug reporting system.

1.1. Checkers

The analysis of your code is done by a collection of programs called checkers, which are the foot soldiers of analysis. Each checker looks for a specific kind of issue, which can range from the simple to the complex. A simple checker might flag a missing break statement or find a bad comparison. A more sophisticated checker might find code that is vulnerable to cross-site scripting attacks or might flag a method call that is not guarded by an authorization check. There are many possible categories of issues, among them:

  • Memory corruption

  • Resource leaks

  • NULL object or pointer dereferences

  • Thread concurrency

  • Web application security flaws

  • Lines, files, and functions that are insufficiently tested

Coverity Analysis uses hundreds of checkers, and supports over a dozen programming languages.

Coverity also includes checkers that analyze your code to test its adherence to a variety of coding standards: MISRA, CERT, OWASP, and so on.

When you install Coverity, a given set of checkers is enabled by default. If you want to change that, you can enable or disable different checkers when you configure analysis. After looking at results, you can further refine the analysis by redefining its scope, by filtering out certain results, or by customizing the behavior of specific checkers. Checker behavior is continually reviewed and updated to minimize false positives and to improve performance.

1.2. Deployment Architecture

A basic Coverity deployment consists of the following two components:

  • Coverity Analysis – Analyzes the code base.

  • Coverity Connect – Manages code defects; it uses a database to store analysis results.

Deployment architecture addresses the configuration of these two components. Different types of deployments support different workflows.

Analysis can be local, central, or a combination of the two. Deployment models range from running everything on a single machine to fully automated continuous integration (CI) with automatic triage and assignment. The following sections provide two basic examples.

1.2.1. Central Analysis

With central analysis, the code is built and analyzed on a shared build server. The following diagram illustrates a basic Coverity deployment. As shown in the diagram, Coverity Connect is installed on a separate host along with its embedded database. The process includes the following steps:

  1. Coverity Analysis is installed on a build server where the artifacts of the build are analyzed.

  2. At the conclusion of each build-and-analysis run, code issues that have been discovered are committed to Coverity Connect as issues.

  3. Developers use their clients to connect to the Connect server and check out the code for which they are responsible.

  4. Developers examine the issues found, and attempt to resolve them.

  5. Developers check their code in again, and another analysis is run at the scheduled time.

  6. As multiple developers perform steps 3-5, Coverity Connect tracks each issue's history and evolution to allow managers to look at trends and generate progress reports.

Figure 1.2. Central Analysis


1.2.2. Local and Central Analysis

In the deployment example just described, all code analysis is performed centrally on the build server. The following deployment example augments the previous example by adding Coverity Analysis to the developer's machine. This deployment supports a workflow like the following:

  1. Coverity Analysis is installed on a build server where the artifacts of the build are analyzed.

  2. At the conclusion of each build-and-analysis run, which happens daily or whenever code is checked in, code issues that have been discovered are committed to Coverity Connect as issues.

  3. Developers use their clients to browse the Connect server and review issues that have been assigned to them.

  4. The developer performs analysis locally, and resolves issues.

  5. The developer checks in fixed code.

  6. The central build also runs an analysis to discover issues that arise from the code's interaction with other checked-in code.

  7. The developer resolves any additional issues discovered in the central analysis and checks in code again.

To support this model, the Code Sight or Coverity Desktop plug-in is installed in the developer's IDE, allowing the developer to find, examine, fix, and analyze code directly in the IDE.

Figure 1.3. Combined Analysis


1.2.3. Advantages of Multiple Testing Models

The availability of different testing models makes it possible to introduce static analysis without alienating developers who like to keep control of their processes. Rather than forcing them to use another tool, you can first deploy Coverity in the nightly and weekly Jenkins continuous integration builds. Development managers can then present Coverity findings as work items, email, or issues in code reviews. Voluntary, hands-on use of Coverity will grow as developers elect to fix their own issues, keep from breaking builds, and keep their names off the 'hit list'.

1.2.4. Deployment Options

Coverity supports many deployment options. Basic options involve the following:

  • Using the embedded PostgreSQL database or configuring Coverity Connect to use your own external PostgreSQL database.

  • Deploying Coverity Connect as either a stand-alone application or in a cluster.

  • Configuring multiple Coverity Analysis instances to commit issues to the Coverity Connect server.

1.3. Combining Results from Coverity and Other Analysis Tools

A complete analysis of your code might require results from tools other than Coverity. Mature software development organizations often have legacy tools that have gathered valuable information.

You can import results from other analysis tools into a Coverity project using the Third Party Integration Toolkit. Once imported, these results can be shown interleaved with Coverity findings. This allows you to make your analysis more efficient by looking at all results in one place. It also makes it possible to compare the effectiveness and scope of analysis performed by different tools.
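As a sketch of how an import might look on the command line, the cov-import-results command reads issues from a file in Coverity's documented third-party input format and adds them to an intermediate directory, after which they can be committed alongside native results. The paths, host, and stream names below are hypothetical; see the Coverity Analysis User and Administrator Guide for the exact JSON input format.

```
# Import third-party findings (hypothetical paths and names).
cov-import-results --dir /build/idir lint-results.json

# Commit the combined results to a stream on the Connect server.
cov-commit-defects --dir /build/idir --host connect.example.com \
  --stream my-project --user committer
```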

Chapter 2. Working with Static Analysis

The following sections describe the roles and responsibilities involved in working with static analysis, as well as the basic workflow.

2.1. Roles and Responsibilities

Coverity Static Analysis involves a number of players; their roles and their privileges are summarized below:

  • Administrator - Installs, configures, and sets up product packages and components. Provides access to users and groups.

  • Dev/Ops - Advises administrators about setup and configuration issues, works with administrators to integrate Coverity in the product build cycle, advises on analysis tuning and customization.

  • Developer - Views and fixes issues found by static analysis. Developer team leaders monitor trends related to overall quality of development work.

  • Manager - Triages issues, uses filters and dashboards to monitor and manage results, generates reports.

Each of these roles plays a part in the basic workflow, described next. (Note: the actual role names in Coverity differ from these, which are used more generically to describe responsibilities.)

2.2. Basic Workflow

The basic workflow varies with your deployment and depends on whether the build is done using the GUI or using commands or scripts. In this section, we assume a command-line interface (CLI) based approach in order to better explain each step. With respect to the roles just described, the steps are carried out as follows:

  1. Setup: The administrator installs Coverity and configures maintenance tasks.

  2. Configure: The administrator or Dev/Ops provides information about the language of the source files to capture and analyze, and for build capture, provides settings that are used to emulate your native compiler, its options, definitions, and version.

  3. Analyze: Involves the following sub-steps. Whether these are implicit or explicit depends on whether you use the GUI or the CLI and on the degree of control you want over the analysis.

    • Capture: The developer or Dev/Ops creates the intermediate directory for the source code to be analyzed.

    • Analyze: The developer or Dev/Ops directs Coverity to scan the code using currently enabled checkers.

    • Commit: The developer or Dev/Ops commits the defect database and summary to the Coverity Connect server.

  4. Organize: The developer or Dev/Ops filters and inventories issues and related data.

  5. Triage: The developer or Manager triages issues. These are fixed, dismissed, or archived.

  6. Resolve: The developer updates code to resolve the issues identified during analysis.

  7. Report: The manager or Dev/Ops monitors dashboards, evaluates trends, and generates reports.

These steps are described in the following sections. Of course, in many deployments, some of these steps would be automated.

2.2.1. Setup

In the setup stage, the administrator installs Coverity according to the preferred deployment model and schedules maintenance tasks.

  • Installing Coverity includes selecting the preferred database, creating projects and streams to identify the code base, and creating groups and users.

  • Maintenance tasks include managing the size of the database and scheduling backups, debugging with help from event logs, and upgrading.

  • When using an IDE plug-in, setup and maintenance includes configuring and managing the connection between the plug-in and the server.

Documentation Resources

  • Coverity Installation and Deployment Guide

  • Coverity Upgrade Guide

  • Coverity Platform User and Administrator Guide

2.2.2. Configure

In the configure stage, the administrator or Dev/Ops provides information about configuring the analysis for your project. The required information varies with the type of analysis you perform:

  • For compiled languages, specify the settings to be used by the analysis engine to emulate your native compiler. Furnish the information needed about the build processes, dependencies, and build-related programs used in building the code to be analyzed.

  • For scripting languages or buildless capture, specify the files to be analyzed. Typically, you want to analyze source code, configuration files, and any library code that your source code needs to compile or run.

You provide configuration information using a JSON configuration file.
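In build-capture setups, the per-compiler configuration is typically generated with the cov-configure command rather than written by hand. The invocations below are an illustrative sketch, not a complete setup; the cross-compiler path is hypothetical.

```
# Generate template configurations for common compilers.
cov-configure --gcc     # gcc and g++ builds
cov-configure --java    # javac-based Java builds

# Register a specific cross-compiler (hypothetical path).
cov-configure --comptype gcc \
  --compiler /opt/toolchain/bin/arm-none-eabi-gcc
```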

Documentation Resources

  • Coverity Analysis User and Administrator Guide

2.2.3. Analyze

The work of analysis is done in three stages: capture, analyze, and commit. The stages are usually performed sequentially on the same machine. Here we describe each of these component steps, which you might want to configure independently to assist debugging and to support advanced use cases.

  • Capture

    In this stage, Coverity captures a representation of your source code (whether compiled or file-based) and stores it in a known location, separate from the build artifacts. Coverity analysis does not modify source code or compiled binaries.

  • Analyze

    During this stage, the developer or Dev/Ops uses the GUI, the CLI, or a script to scan the binary representation of the code from the capture stage for issues or rule violations.

  • Commit

    In this stage, analysis results are committed to the database (a collection of analysis instances corresponding to a release branch). The command or script that initiates the commit specifies the data-storage location, the connection information for the Connect server, and the user credentials.
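The three stages above correspond to three commands when run from the CLI. The following is a minimal sketch of one build-analyze-commit cycle; the intermediate directory, host name, stream name, and user are hypothetical placeholders for your own deployment's values.

```
# 1. Capture: wrap the native build to populate the intermediate directory.
cov-build --dir /build/idir make all

# 2. Analyze: scan the captured representation with the enabled checkers.
cov-analyze --dir /build/idir

# 3. Commit: push the results to a stream on the Coverity Connect server.
cov-commit-defects --dir /build/idir --host connect.example.com \
  --stream my-project --user committer
```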

[Note]

The Third Party Integration Toolkit is available to combine third-party issues with the Coverity Connect database.

Resources

  • Coverity Analysis User and Administrator Guide

2.2.4. Organize, Triage, and Resolve

The developer or Dev/Ops can use a number of different clients to organize and triage the issues found and committed during analysis: each of these clients provides descriptions of the issues and shows where the issues exist in the source code.

  • Coverity Connect is a web-based application that enables you to manage and fix issues found using Coverity Analysis and third-party tools.

  • Desktop Analysis can be used from the command line, or from your IDE if you use a plug-in. Plug-ins are available for Eclipse, IntelliJ IDEA, and Visual Studio.

  • Code Sight is a plug-in that runs in a number of IDE applications and helps you quickly find quality and security issues in your source code. It highlights issues directly in the environment’s editor.

  • Coverity Policy Manager is accessible from the Connect GUI. You use it to build decomposed and aggregated views of your software, and you use your findings to better align reporting with business objectives. As an example, you might want to separate internal and external applications. Or you might want to look at only web-facing components, or to look at only components that handle personally-identifiable information, and so on. Many applications consist of safety-critical parts plus much larger (in terms of lines of code) user interfaces, and you might want to focus on the safety-critical parts.

Using these tools, you can organize your Coverity Analysis results. You can sort and organize issues based on issue type and priority, you can assign some for immediate resolution, and you can schedule less critical issues for the future. Integration with SCMs, email, and bug tracking systems, such as JIRA, allows you to use existing processes to carry out this work.

Once issues have been organized and prioritized, they can be triaged. Triage data and history are stored in a common database. All issues are categorized into a single workflow so developers can see what needs to be resolved first. A developer might do one or more of the following:

  • Receive notification (email, JIRA work item, and so on) or log into Coverity Connect and look at source code.

  • Review the defect - debug, run another program, and so on.

  • Fix/Dismiss, and so on, according to workflow and type of defect.

Resources

  • Coverity Platform User and Administrator Guide

  • Coverity Platform Web Services API Reference

2.2.5. Report

During the report stage, the manager or Dev/Ops can look at many different projects, releases, and so on, in order to examine dashboards and trends, and to create reports. Possible actions include:

  • View and organize dashboards and charts in Coverity Connect.

  • Generate reports:

    • Coverity Integrity Report

    • Security Report

    • Coverity MISRA Report

    • Synopsys Software Integrity Report

    • CVSS Report

    • CERT Report

    • OWASP Web Top 10 Report

    • Mobile OWASP Top 10 Report

    • PCI DSS Report

  • Report on output from the Coverity Policy Manager as code is tested for adherence to policies for code security and quality.

Resources

  • Coverity Platform User and Administrator Guide

Chapter 3. Customizing and Extending Static Analysis

Default settings are designed to support most analysis targets most of the time, but Coverity also offers a number of ways to customize analysis. Customization usually begins with fine-tuning the deployment and analysis configuration to improve performance and manage findings. You can further customize analysis to support atypical applications or deployment environments, and to improve analysis results by eliminating false positives and false negatives.

There are simple, global ways to adjust the results of an analysis, more specific ways to tune analysis results, and for advanced users, ways to extend the capabilities of the analysis itself. The following sections describe each of these options.

3.1. Global Choices

Using simple global customization choices, you can specify subsets of the code base to analyze, specific options to use during analysis, and so on. You can also choose to trust or distrust different kinds of data sources.

  • The cov-analyze command, which you invoke to analyze code, provides options for coarse-grained, global control over which code is analyzed and which checkers are run.

  • The application-wide trust model classifies various kinds of data sources as either trusted, or as distrusted and potentially malicious. You can specify whether different kinds of data sources are trusted; examples include HTTP requests, filesystems, remote procedure calls, databases, and HTTP headers.
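A global customization of this kind might look like the following cov-analyze invocation, which is an illustrative sketch rather than a recommended configuration; confirm each option against the Coverity Command Reference for your release. The intermediate directory and the third-party path pattern are hypothetical.

```
# Coarse-grained, global control over scope, checkers, and trust.
cov-analyze --dir /build/idir \
  --webapp-security \
  --disable DEADCODE \
  --tu-pattern "! file('/third_party/')" \
  --distrust-all
```

Here --webapp-security enables a checker group, --disable turns off a single checker, --tu-pattern restricts which translation units are analyzed, and --distrust-all tells dataflow checkers to treat all data sources as potentially tainted.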

Resources

  • Coverity Command Reference

3.2. Checker, Language, and Context-based Choices

Customization options that are particular to certain checkers, to a certain language, or to other conditions, provide a more fine-grained way to customize the results of an analysis.

  • Checker-specific options

    You can use checker options to tune checker behavior. The most common reason for using these options is to reduce false positives or false negatives.

  • Custom models

    When Coverity scans code written in a statically typed, compiled language (such as C, C++, C#, Java, or Visual Basic), it generates, for each function in the source, a model that abstracts the function's behavior at execution time.

    You can write your own model of a function, in order to override the model generated by Coverity and to better describe the function’s behavior. Custom models can be useful both for finding more bugs, and for eliminating false positives.

  • Code annotations

    You can change analysis behavior by adding analysis annotations to the source code being analyzed. These annotations help Coverity Analysis interpret function behavior. Annotations can be used to suppress reports of code patterns that have an intentional purpose despite any vulnerabilities they might enable.

  • Analysis directives

    Security analysis directives are an expressive configuration format for providing hints and describing patterns that cannot easily be captured using a model or annotation. They also form the backbone of API description for dynamically typed languages, which require a dataflow-based approach to identify object types of interest.
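Two of the mechanisms above have direct CLI counterparts, sketched below. The checker, option name, threshold value, and model file names are illustrative; consult the Coverity Checker Reference for the options actually supported by each checker in your release.

```
# Tune a single checker via a checker option (hypothetical threshold).
cov-analyze --dir /build/idir \
  --checker-option PASS_BY_VALUE:size_threshold:256

# Compile hand-written function models, then use them in the analysis.
cov-make-library --output-file my_models.xmldb my_models.c
cov-analyze --dir /build/idir --user-model-file my_models.xmldb
```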

Resources

  • Coverity Analysis User and Administrator Guide

  • Coverity Checker Reference

3.3. New Custom Checkers

Rather than tuning or modifying the behavior of existing checkers, you might want to customize the analysis by adding special-purpose checkers of your own design. Coverity provides the following ways to do so:

  • Generic customizable dataflow and text checker frameworks

    A dataflow checker (DF.CUSTOM_CHECKER) reports when untrusted strings, streams, and byte arrays from a tainted source are propagated through the program and used at an unsafe sink. Many security vulnerabilities fit this general pattern: these include injection issues, data exposure, insecure object references, and more. Custom checkers can specify a trust model that enhances Coverity Analysis’s extensive built-in modeling of data sources.

    A text checker (TEXT.CUSTOM_CHECKER) can match patterns that indicate illegal data, misconfiguration, or other issues of concern. The patterns to match can be either regular expressions or XPath queries.

  • CodeXM checkers

    CodeXM is short for Code eXaMination. It is an interpreted language you can use to write customized checkers, defining specific patterns that you want to find in your source code. It exposes the underlying abstract syntax tree, a data structure representing the source code to be analyzed, and lets you scan it directly for matches. CodeXM can also detect certain conditions based on program states; for example, an execution path.

Resources

  • Coverity Checker Reference

  • Coverity CodeXM documentation

Chapter 4. Use Cases

Coverity is a rich, extensible static analysis tool. To help visualize its use, we offer the following cases as concrete examples of the application of static analysis.

4.1. Securing Web Applications

Coverity Static Analysis can keep your web applications secure by helping you find security issues before the bad guys do.

The analysis detects when unsafe data enters your web application from HTTP requests, network transactions, untrusted databases, console input, or the file system. It tracks this unsafe data, and if the data is used incorrectly within a given context, Coverity reports the usage as an issue. Coverity provides actionable remediation advice for the technologies in use. It can flag the following vulnerabilities:

  • SQL injection

  • Cross-site scripting

  • OS command injection

You can enable specific groups of checkers for securing web applications when you configure analysis.

In addition to identifying problematic areas, Coverity offers open source libraries of sanitizers that can protect vulnerable code for Java and C#.

4.2. Addressing Coding Standards Violations

The use of compliance checkers, such as AUTOSAR, MISRA, CERT C, and CERT C++, is most often required by contracts with your own customers. Using Coverity, you can select validation for specific standards when you configure analysis. Coverity provides validation for the AUTOSAR, DISA, ISO TS 17961, MISRA, SEI CERT, and OWASP web and mobile standards.

However, validation against these standards poses its own challenges:

  • Validating code against the coding rules defined by these standards often generates huge numbers of findings.

  • Additionally, different components of the product under development, such as system libraries, non-critical code, or third-party code, might generate issues in numbers that dwarf (and hide) issues found in the critical IP sections.

For these reasons, compliance checkers might take a significant amount of time and generate very large databases on the Coverity Connect server. Coverity offers a number of strategies to manage these problems:

  • After learning the customer's product design and build system, you can skip over tangential code by restricting analysis to the required sections.

  • You can further minimize the time and resources required for analysis by running compliance analysis separately from quality and security analysis.

  • You can post compliance results to different Coverity Connect servers than those used for quality and security issues.

  • You can adjust the cadence, and thus the expense, of analysis to match the rate at which issues are required to be addressed. While you might want to check quality or security results daily, you might only need to check compliance results once a week or once a sprint.
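The separation strategies above can be sketched as a dedicated compliance run committed to its own stream. The configuration file path, host, stream, and user below are hypothetical; --coding-standard-config selects the standard to validate, and --disable-default keeps the quality and security checkers out of this run.

```
# Run compliance analysis separately from quality/security analysis.
cov-analyze --dir /build/idir \
  --coding-standard-config /config/misra-c2012.config \
  --disable-default

# Commit the compliance results to their own stream (or server).
cov-commit-defects --dir /build/idir --host compliance.example.com \
  --stream my-project-misra --user committer
```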

4.3. Using Coverity to Differentiate Your Product in the Marketplace

Software development is a highly competitive industry. Demonstrating the robustness and quality of your code can help differentiate your product in the marketplace. Coverity can help you do that.

For example, two small Houston-based companies, a NASA contractor and an Oil and Gas formation analysis service vendor, described their use of Coverity to prospective customers as a best-in-market tool used to ensure the quality and security of their products. The NASA contractor negotiated the inclusion of the Coverity Integrity Report as a regular deliverable item. Assuring 'Coverity Clean' development helped these vastly different businesses achieve the same goals - differentiate in the marketplace, win contracts, make sales, and grow their business.

Chapter 5. Documentation for Installation

Use the following documents for installation and upgrade:

Initial Setup and Planning (Coverity Platform, Coverity Analysis, and Coverity Desktop)

Chapter 6. Documentation Set

Coverity provides a rich and extensive documentation set numbering in the thousands of pages. Depending on your role, however, you are likely to need only a subset. This section describes the Coverity documentation set other than installation.

Coverity Platform (Coverity Connect) Components and Extensions (Issue Management, Reporting, Web Services)

  • Coverity Platform (Coverity Connect, Coverity Policy Manager, Coverity Integrity Report, Security Report, MISRA Report, Synopsys Software Integrity Report): Setting up and using Coverity Platform components.
    Coverity Platform User and Administrator Guide (PDF)

  • Web Services: Creating web applications or scripts that communicate with the Coverity Connect database.
    Coverity Platform Web Services API Reference

  • SonarQube plug-in: View and triage Coverity issues from the SonarQube environment.
    The Sonar plug-in GitHub repository contains .jar files for the latest version of the plug-in, along with installation and use instructions in the Coverity.Sonar.Plug-in.pdf file.

Coverity Analysis Components and Extensions (Analysis, Compiler and Third-party Issue Integrations)

Coverity Analysis : Custom Checker Development (CodeXM and Extend SDK)

Coverity Desktop Analysis (Local Analysis and Coverity Desktop Plug-ins)

Architecture Analysis

Architecture Analysis helps you create structured hierarchies of your source file directories. Information on Architecture Analysis can be found in the following documents:

Reference Guides (Coverity Platform, Coverity Analysis, and Coverity Desktop)

  • Coverity Commands: Understanding the commands used to set up and run analyses and to perform other important tasks.
    Coverity Command Reference (PDF)

  • Coverity Checkers: Understanding the Coverity checkers that find issues in your source code, modeling library functions/methods, understanding security issues, and getting remediation advice. End users need to understand the issues found by checkers. Administrators can enable and disable the checkers that are used to analyze source code. Developers can improve analysis results by creating models that simulate the behavior of library functions/methods.
    Coverity Checker Reference (PDF)

  • Coverity Security Directives:
    Coverity Security Directives (PDF)

Chapter 7. Getting Started

After installing or upgrading product components (see Initial Setup and Planning (Coverity Platform, Coverity Analysis, and Coverity Desktop)), you can get started with Coverity Platform, Coverity Analysis, or Coverity Desktop.

7.2. Getting Started with Coverity Analysis

Coverity Analysis is primarily for administrators (and power users) who need to set up and run analyses of code bases. You can get started with analyses of your source code by using the GUI-based Coverity Wizard or by using the command line. For details, see the following documentation:

7.3. Getting Started with Coverity Desktop

Coverity Desktop is for end users who need to set up and run local analyses and examine the resulting software issues from an IDE (integrated development environment).

Getting Started with Coverity Desktop for Eclipse

  • Use Eclipse-based IDEs to find, manage, and fix software issues from your desktop.

Getting Started with Coverity Desktop for Microsoft Visual Studio

  • Use the Microsoft Visual Studio IDE to find, manage, and fix software issues from your desktop.

Getting Started with Coverity Desktop for IntelliJ IDEA and Android Studio

  • Use IntelliJ IDEA or Android Studio to find, manage, and fix software issues from your desktop.

Chapter 8. Online Resources

Coverity Customer Center (requires login)

  • Product downloads and access to the Upgrade Overview, Release Notes, Installation and Deployment Guide, and other technical documentation. Read these documents before you install or upgrade this release.

  • Information and resources such as sample scripts to use in your deployment.

If you have questions, contact .