Web application vulnerability scanning, in the palm of your hand

At CovertSwarm we pride ourselves on providing the Offensive Operations Center as a centralized platform for identifying and managing risk to your organization.
Following on from our recent blog post on web application security and why it is important, one of the many new features we have recently implemented is the ability for our users to execute discovery and vulnerability scans against their application stack.
This blog post provides an insight into how the web application discovery and vulnerability scanning works, and touches on some of the challenges we faced during development.
The Web Application Scan feature is one of our Automated Reconnaissance tools that allows users to schedule recurring scans against their application assets.
The purpose of this tool is to discover more about the targeted assets through spidering of the web pages and the configuration of the applications, but also to enumerate and report on any vulnerabilities that may be present at the time of the scan.
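To make the spidering and passive-checking idea concrete, here is a minimal Python sketch built on the standard library: a link collector using `html.parser` to mimic basic page spidering, plus a passive check that reports missing response security headers. The class names and the header list below are illustrative assumptions, not CovertSwarm's actual scanner code.

```python
from html.parser import HTMLParser

# Example headers a passive check might look for -- an illustrative
# subset, not the scanner's real rule set.
SECURITY_HEADERS = [
    "Strict-Transport-Security",
    "Content-Security-Policy",
    "X-Content-Type-Options",
]


class LinkSpider(HTMLParser):
    """Collects href targets from anchor tags, mimicking basic spidering."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)


def missing_security_headers(response_headers):
    """Passive check: report which recommended headers are absent.

    Header names are compared case-insensitively, as HTTP requires.
    """
    present = {h.lower() for h in response_headers}
    return [h for h in SECURITY_HEADERS if h.lower() not in present]
```

A real scanner would fetch each discovered link and feed the response body back into the spider, but the separation shown here (discovery on one side, per-response checks on the other) is the general shape of the approach.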
Some examples of the potential findings that can be identified from the web application vulnerability scanner include:
Setting up a scan is simple. You’ll need to create a new scheduled job via the Automated Reconnaissance page.
From here, select the automation type and provide a friendly name for the scan to differentiate from other scans that may have been created.
Next, select a target for the scan.
Due to the potential length of time a scan can take, only one target per scan can be configured; however, multiple ports per target can be set later in the configuration section.
You can change the target at any time in the future after the scan has been initially set up. Multiple scans for different target assets can be configured as required.
The configuration section is next, which controls how the scan behaves both in terms of scan ‘speed’ and level of depth. Here the scan type of ‘Passive’ or ‘Active’ can also be selected.
Firstly, within the ‘General’ section, there is an option to limit the scan to the root path (‘/’) only. This can be useful in situations where a ‘Passive’ scan is selected and only basic vulnerability and enumeration checks will be performed – for example, to identify insecure header configurations for the base page alone.
Additionally, this section allows for the selection of the target ports for the selected asset. Multiple ports can be provided if there are web services running across different ports. When you click into the port input field, the list will pre-populate with any web services already known to the OOC.
If you are adding a new target port, ensure you select whether the service uses HTTP or HTTPS.
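As a rough sketch of what such a target configuration might look like, here is a small Python model of a single target with multiple ports, each carrying its HTTP/HTTPS choice. The class and field names are illustrative assumptions, not the OOC's actual data model.

```python
from dataclasses import dataclass, field


@dataclass
class PortConfig:
    port: int
    scheme: str  # "http" or "https" -- must be chosen when adding a new port

    def __post_init__(self):
        if self.scheme not in ("http", "https"):
            raise ValueError(f"unknown scheme: {self.scheme}")


@dataclass
class ScanTarget:
    host: str
    root_path_only: bool = False  # the 'General' option limiting the scan to '/'
    ports: list = field(default_factory=list)

    def base_urls(self):
        """One base URL per configured port, for the scanner to start from."""
        return [f"{p.scheme}://{self.host}:{p.port}/" for p in self.ports]
```

For example, a single host serving HTTPS on 443 and a plain-HTTP admin interface on 8080 would be one `ScanTarget` with two `PortConfig` entries, matching the one-target-many-ports rule described above.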
Next is the ‘Spidering’ section. The configuration options here will default to a reasonably fast scan but can be further adjusted as needed for each application. The two options here are:
Basic page crawling via spidering is always enabled so that the scanner can discover pages to assess.
The final configuration section is the ‘Scanning’ options. Here you can see that there are two scanning types:
If you are running a scan for the first time it is recommended to use Passive scanning to get an understanding of the target’s attack surface and how the scan may affect the target before running an Active scan.
When selecting an Active scan there are two additional options that can be adjusted to configure the following:
Finally, you can then set the recurring schedule as desired.
Choosing Weekly will allow you to select the day of the week to re-run the scan, whilst Monthly will allow you to set a date during the month to run the scan on a monthly interval. The time zone can be selected to adjust to your local time as needed.
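The weekly scheduling logic described above can be sketched in a few lines of Python: given a chosen weekday, hour, and time zone, find the next occurrence. This is a minimal illustration using the standard library's `zoneinfo`, not the OOC's actual scheduler.

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo


def next_weekly_run(now, weekday, hour, tz="UTC"):
    """Next occurrence of the given weekday/hour in the user's time zone.

    weekday follows the datetime convention: 0 = Monday ... 6 = Sunday.
    `now` must be a timezone-aware datetime.
    """
    local = now.astimezone(ZoneInfo(tz))
    candidate = local.replace(hour=hour, minute=0, second=0, microsecond=0)
    # Roll forward to the requested weekday...
    candidate += timedelta(days=(weekday - candidate.weekday()) % 7)
    # ...and if that slot has already passed, take the same slot next week.
    if candidate <= local:
        candidate += timedelta(days=7)
    return candidate
```

A monthly variant works the same way but rolls forward by calendar months, clamping the chosen date for shorter months; doing the arithmetic in the user's own zone is what makes the "adjust to your local time" option work correctly across daylight-saving transitions.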
If you’re a little technically inclined and want to know more about how our setup works, or you’d like some reassurance about how we’re handling data, let us walk through a bit of how the scanners work on the back-end.
Once a scan is set up, each of the relevant configuration items (schedule, port selection, target selection, etc.) is stored in a queryable format. This is so the Offensive Operations Center scheduling ‘orchestrator’ can quickly identify the next scan(s) to run for you and any of our customers.
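To see why a queryable store matters here, consider a toy version using SQLite: with the schedule held as data, "which scans are due?" becomes a single indexed query rather than a walk over every customer's configuration. The schema and column names below are hypothetical; the real OOC store is certainly richer.

```python
import sqlite3

# Hypothetical schema for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE scan_jobs (
        id INTEGER PRIMARY KEY,
        organisation TEXT NOT NULL,
        target TEXT NOT NULL,
        ports TEXT NOT NULL,          -- e.g. "443,8443"
        scan_type TEXT NOT NULL,      -- 'Passive' or 'Active'
        next_run TEXT NOT NULL        -- ISO-8601 UTC timestamp
    )
""")
conn.executemany(
    "INSERT INTO scan_jobs (organisation, target, ports, scan_type, next_run)"
    " VALUES (?, ?, ?, ?, ?)",
    [
        ("org-a", "app.example.com", "443", "Passive", "2024-01-01T09:00:00"),
        ("org-b", "shop.example.net", "443,8443", "Active", "2024-06-01T02:00:00"),
    ],
)


def due_scans(conn, now_iso):
    """The orchestrator's core question: which scans are due to start?"""
    rows = conn.execute(
        "SELECT id, organisation, target FROM scan_jobs"
        " WHERE next_run <= ? ORDER BY next_run",
        (now_iso,),
    )
    return rows.fetchall()
```

Storing `next_run` as a sortable timestamp is the key design choice: the orchestrator never needs to understand each scan's recurrence rule at query time, only to compare timestamps.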
When a scan’s scheduled start time is reached, the orchestrator aggregates all of the configured options and pushes these to the relevant automation. In this example, it will be the web application scanner. Some of this information includes additional details such as the organisation identifier, who created and last modified the scan, and so forth – this type of information is used to later attribute information to assets, services, URLs, and vulnerabilities (if identified!).
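The aggregated handoff from orchestrator to automation can be pictured as a single serialised message. The field names below are assumptions for illustration, not the OOC's real message schema, but they show the kind of attribution metadata (organisation identifier, who created and modified the scan) travelling alongside the scan options.

```python
import json

# Illustrative payload only: every field name here is an assumption.
scan_message = {
    "automation": "web-application-scanner",
    "scan_id": 1,
    "organisation_id": "org-a",
    "created_by": "alice@example.com",
    "last_modified_by": "bob@example.com",
    "target": {"host": "app.example.com",
               "ports": [{"port": 443, "scheme": "https"}]},
    "options": {"scan_type": "Passive", "root_path_only": False},
}


def to_queue_payload(message):
    """Serialise the aggregated options for handoff to a scanner instance.

    sort_keys makes the payload deterministic, which helps with logging
    and de-duplication.
    """
    return json.dumps(message, sort_keys=True)
```

Because the attribution fields ride along with the scan options, whatever the scanner later discovers (assets, services, URLs, vulnerabilities) can be linked straight back to the owning organisation without a second lookup.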
Crucially, when a scan is started (for the sake of this example, let’s say there are five different scans from different organisations/users), this will create five separate instances of the scanner. These five instances run completely independently of each other and produce separate individual output files that are imported into the Offensive Operations Center later on. We take security very seriously here at CovertSwarm, and segmenting data is one basic piece of the puzzle.
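The per-scan isolation can be sketched as follows: each scan gets its own working directory and its own output file, with no state shared between instances. This is a deliberately simplified stand-in (the function and file names are assumptions) for what in practice would be spawning an isolated container or worker per scan.

```python
import json
import os
import tempfile


def run_scanner_instance(scan, workdir):
    """Run one scan in its own directory -- no state shared with any other.

    A stand-in for launching an isolated scanner instance; here the "scan"
    just writes an empty findings file to its private output path.
    """
    out_dir = os.path.join(workdir, f"scan-{scan['scan_id']}")
    os.makedirs(out_dir)
    out_path = os.path.join(out_dir, "results.json")
    with open(out_path, "w") as fh:
        json.dump({"scan_id": scan["scan_id"], "findings": []}, fh)
    return out_path


workdir = tempfile.mkdtemp()
scans = [{"scan_id": i} for i in range(1, 6)]  # the five scans in the example
outputs = [run_scanner_instance(s, workdir) for s in scans]
```

Since every instance writes only inside its own directory, one organisation's results can never bleed into another's import, which is the segmentation property the paragraph above describes.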
With the scan now initialised there are several stages that the instance will run through:
Once a scan has completed, its next ‘trigger’ time will be updated as per the scheduled settings and the cycle will continue to ensure the cyber risk gap is minimised as much as possible.
We’re always looking to innovate here at CovertSwarm. There are many new features being added to the roadmap and we’re always listening to our customers to hear ways in which we can provide a better user experience, simplify their workflows, or enrich the data we capture and process.
However, as a sneak peek, here are some of the features we’re investigating relating specifically to web application scanning: