Process Deployment Planning: Building a Process Deployment Plan

Once you map the tasks to specific process steps, you can build a process deployment plan with the following components:

  • Process diagram(s) and activity descriptions
  • RACI chart(s)
  • Expected outputs
  • Handling changes
  • Frequency of use and level of rigour guidelines

Process Diagram(s) and Activity Descriptions

A process diagram illustrates how each SD Elements activity fits within the development process. Use an existing diagram or build a new one that shows the major process steps, and annotate the steps with activities where applicable. Provide a description of each activity in the diagram in this section.

Example diagram showing SD Elements in a waterfall process

RACI Charts

Responsible, Accountable, Consulted, and Informed (RACI) charts accompany process diagrams to explain the roles involved in each SD Elements activity. The charts should outline the key roles for the organization and process, and their involvement in each SD Elements activity. Each activity should have one person who is responsible (“R”), one person who is accountable (“A”), and, optionally, people who are consulted (“C”) or informed (“I”). Organizations with rigorous processes, such as those in highly regulated industries, will have many different roles involved, while lighter-weight organizations may have everything fall to just one person.

An example RACI chart for a waterfall development team:

| Activity | Project manager | Technical architect | Lead developer | Developer | Information security analyst |
| --- | --- | --- | --- | --- | --- |
| Create & model project, set filters | A | R | | | C, I |
| Review tasks | A | R | C | | |
| Integrate with JIRA | A | | R | | |
| Handle development tasks | A | | R | I | |
| Review activity | A, R | | | | C, I |
| Import Fortify tool results | I | | | | A, R |
| Perform “verification” test tasks | I | | A, R | A, R | |
| Perform “testing” test tasks | I | | | | A, R |
| Generate reports | A, R | | | | I |
| Archive project | A, R | C | | | I |
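If you maintain RACI charts for several teams or processes, it can help to keep them in a machine-readable form and sanity-check them automatically. The following is a minimal Python sketch, not an SD Elements feature; the chart data and helper function are illustrative:

```python
# Minimal sketch: a RACI chart as a dict, with a basic consistency check.
# Activities and roles below are illustrative; adapt them to your own chart.

RACI_CHART = {
    "Create & model project, set filters": {
        "Project manager": "A",
        "Technical architect": "R",
        "Information security analyst": "CI",
    },
    "Review tasks": {
        "Project manager": "A",
        "Technical architect": "R",
        "Lead developer": "C",
    },
}

def check_raci(chart):
    """Flag activities that lack a responsible or accountable role."""
    problems = []
    for activity, assignments in chart.items():
        letters = "".join(assignments.values())
        if "R" not in letters:
            problems.append(f"{activity}: no responsible (R) role")
        if "A" not in letters:
            problems.append(f"{activity}: no accountable (A) role")
    return problems

for problem in check_raci(RACI_CHART):
    print(problem)
```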

Expected Outputs

Every process within an organization may have different expectations of the work products or outputs from using SD Elements. For example, some processes will require several different reports to be generated, while others will simply need tasks created in an ALM tool. Take audit requirements into consideration when selecting expected outputs.

You should determine the optimal outputs for the particular process you are working with. Some examples include:

  • Tasks created in an ALM tool
  • Completion status report
  • Specific compliance report
  • All tasks report
  • CSV export to be added as a table in a requirements document

You should also specify where these outputs will be stored: for example, in a shared portal or in project management software.
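If one of your expected outputs is a CSV export of tasks, it can be scripted against the SD Elements REST API rather than produced by hand. The sketch below is illustrative only: the endpoint path, response shape, field names, and authentication header are assumptions to verify against the API documentation for your SD Elements version.

```python
# Hypothetical sketch: export SD Elements tasks to CSV via the REST API.
# The endpoint, response shape, and field names are assumptions; verify
# them against the API documentation for your SD Elements version.
import csv
import requests

BASE_URL = "https://sdelements.example.com/api/v2"  # your SD Elements host
API_TOKEN = "..."                                   # your API token
PROJECT_ID = 123                                    # project to export

response = requests.get(
    f"{BASE_URL}/projects/{PROJECT_ID}/tasks/",
    headers={"Authorization": f"Token {API_TOKEN}"},
)
response.raise_for_status()
tasks = response.json().get("results", [])

with open("sd-elements-tasks.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["ID", "Title", "Status"])
    for task in tasks:
        writer.writerow([task.get("id"), task.get("title"), task.get("status")])
```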

Handling Changes

Creating a project in SD Elements, reviewing every activity, and marking task status can be too much overhead for every change to the system. Your plan should outline how to handle system changes using new release functionality, a development phase strategy, or both.

New release

The new release functionality models only what has changed between releases, using the “Changes since last release” questionnaire. You can optionally carry over task status and history between releases if you are only concerned with new requirements.

A common best practice is to carry over task status for the requirements and architecture phases while resetting task status for the development and testing phases in a new release.

Development phase strategies

For fast-moving development organizations, you may want to keep a single project open to handle all changes until the next baseline release. In this case, select one or more appropriate development phase strategies for changes.

Frequency of Use & Level of Rigour Guidelines

Frequency of use

Over time, new vulnerabilities may be introduced into the system that are not captured by looking only at incremental changes. As a result, you should periodically perform additional baseline reviews: that is, emulate the first use of SD Elements on an application by creating a new release that does not carry over any task status and keeps all “Changes since last release” questions checked. The frequency of these reviews should vary with the application's risk. If your organization has an existing application classification scheme, use it here; if not, suggest one.

In addition, you should consider how frequently applications should model new releases or iterations. You can achieve optimal security by using SD Elements continuously; this, however, may not be realistic in your organization. Consider a few different factors when suggesting how frequently to use SD Elements within a given development process:

  • Application risk profile
  • Cultural tolerance for process overhead
  • Leadership's stance on the importance of software security versus other priorities

In most organizations, development teams perform almost no application security activities at the onset of development before using SD Elements. In these cases, consider a phased deployment that starts with a relatively low frequency and moves to a higher frequency: for example, semi-annual baseline reviews in year 1, then modelling every monthly release in year 2 for high-risk applications. You may want to stipulate the frequency period in terms of the number of iterations or releases (e.g., every 4 iterations) if that makes more sense than a calendar period for the development phase.

Level of rigour

Not all applications have the same need for security requirements. You can adjust the number of tasks that each development team needs to work on by specifying a level of rigour: a minimum priority for requirements, architecture & design, and development tasks, plus guidelines on which testing tasks to include. As with frequency of use, the level of rigour guidelines should primarily be based on application risk. You may also employ a phased deployment strategy in which applications start with a relatively low level of rigour and eventually move to a higher level.

First define the levels. For example:

  • Level 1: Review high-priority tasks in all phases, but do not actually implement or test them. This is a “preview” of the level of effort that using SD Elements might take. It is particularly useful for letting teams understand how they might use SD Elements before they are mandated to do so.
  • Level 2: Filter for high-priority requirements, design, and development phase tasks, and work on any incomplete tasks. Integrate an automated scanner for validation, but do not perform any manual tests using tasks from the testing phase.
  • Level 3: Same as level 2, but include medium-priority tasks in requirements, design, and development. Also include manual testing phase tasks that verify requirements, design, and development tasks not otherwise verified by a scanner.
  • Level 4: All tasks in scope.
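To keep tooling and checklists consistent with the plan, these definitions can also be captured as data. Below is a minimal sketch with illustrative field names; they are assumptions you would map onto your own SD Elements filters:

```python
# Minimal sketch: the level-of-rigour definitions above, encoded as data.
# Field names are illustrative; map them onto your own SD Elements filters.
RIGOUR_LEVELS = {
    1: {"min_priority": "high",   "implement": False, "manual_testing": False},
    2: {"min_priority": "high",   "implement": True,  "manual_testing": False},
    3: {"min_priority": "medium", "implement": True,  "manual_testing": True},
    4: {"min_priority": None,     "implement": True,  "manual_testing": True},  # all tasks in scope
}
```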

Next, map the levels of rigour to the application risk level.

The table below is an example of mapping application risk to level of rigour and frequency of use in a two-year deployment:

| Application Risk | Year 1 Frequency | Year 1 Level of Rigour | Year 2 Frequency | Year 2 Level of Rigour |
| --- | --- | --- | --- | --- |
| High | Semi-annual | 2 | Quarterly | 4 |
| Medium | Annual | 2 | Semi-annual | 3 |
| Low | Annual | 1 | Annual | 2 |
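If you automate scheduling or reporting around the deployment plan, the same mapping can be expressed as a lookup table. A minimal sketch mirroring the example above:

```python
# Minimal sketch: the example risk-to-rigour mapping above as a lookup table.
DEPLOYMENT_PLAN = {
    # (application risk, deployment year): (frequency, level of rigour)
    ("high", 1):   ("semi-annual", 2),
    ("high", 2):   ("quarterly", 4),
    ("medium", 1): ("annual", 2),
    ("medium", 2): ("semi-annual", 3),
    ("low", 1):    ("annual", 1),
    ("low", 2):    ("annual", 2),
}

def plan_for(risk, year):
    """Return (frequency, level of rigour) for a given application risk and year."""
    return DEPLOYMENT_PLAN[(risk.lower(), year)]

print(plan_for("High", 2))  # ('quarterly', 4)
```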

 
