GitHub recently extended its CodeQL-based code scanner with the ability to specify which threat model to use. The new feature is available in beta for the Java language.
The feature is implemented as a setting that lets users choose the threat model to apply. The threat model determines which input data is considered trustworthy and which should be treated as a potential source of risk for the system.
By default, CodeQL uses a threat model that considers any remote source, including HTTP requests, to be tainted, that is, untrusted. According to GitHub, this is adequate for most codebases, but many teams will want to extend their set of tainted input sources to include local files, command-line arguments, environment variables, and databases.
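To illustrate the difference, consider a hypothetical Java snippet in which an environment variable (a local source) flows into a file path. The class name, environment variable, and `sanitize` helper below are illustrative assumptions, not part of CodeQL or GitHub's documentation:

```java
import java.nio.file.Path;
import java.nio.file.Paths;

public class ConfigLoader {
    // Under the default threat model, System.getenv is not treated as a
    // tainted source, so this flow goes unreported; with local sources
    // included in the threat model, it would likely be flagged as a
    // potential path-injection issue.
    public static Path configPath() {
        String dir = System.getenv("APP_CONFIG_DIR"); // local source
        return Paths.get(dir, "app.properties");      // file-path sink
    }

    // Hypothetical mitigation: reject path segments containing
    // traversal sequences or separators.
    public static String sanitize(String segment) {
        if (segment == null || segment.contains("..") || segment.contains("/")) {
            throw new IllegalArgumentException("unsafe path segment");
        }
        return segment;
    }
}
```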
Enabling the local threat model in code scanning allows security teams and developers to discover and fix more potential security vulnerabilities in their code.
The new threat model option can be enabled through the GitHub UI, alongside the query suite setting, which selects the group of CodeQL queries to run against your codebase.
Alternatively, it can be enabled by specifying `threat-models: local` in the action workflow file.
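For example, when using advanced setup, the setting can be passed through the `config` input of the `github/codeql-action/init` action; treat the exact surrounding structure here as a sketch:

```yaml
- uses: github/codeql-action/init@v3
  with:
    languages: java
    config: |
      threat-models: local
```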
Finally, when running CodeQL scans through the command line or a third-party CI/CD system, you can provide the `--threat-model=local` flag.
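A command-line invocation might look like the following, where the database path, query suite, and output file are placeholders:

```shell
codeql database analyze ./java-db \
    codeql/java-queries \
    --threat-model=local \
    --format=sarif-latest --output=results.sarif
```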
By letting users specify which threat model to use, the extended CodeQL settings make GitHub's code scanning solution more adaptable to different codebases, since they provide more specific information about the context in which code scanning occurs.
Understanding the threat model associated with a system or codebase is a critical step in ensuring its security. According to the Threat Modeling Manifesto, this kind of analysis starts by identifying what could go wrong and listing all possible threats. Threats are typically unique to each system and vary depending on how it is designed and implemented.
As with many security-related practices, the earlier a threat is identified, the better it can be addressed. Code scanning can be considered a "shift-left" approach to improving system security, but when it fails to catch a threat, mitigations can still be defined at a later stage in the system's life.
Adding support for local threat models is definitely a step forward for GitHub's service, but it still leaves much to be desired: additional aspects of threat modeling are not yet covered by code scanning, including authentication, execution frequency, accessed resources, protected assets, and more.