Clusters of CLM applications without a GUI

I haven’t had much time to write over the past few weeks, as I’ve spent most of it writing up https://jazz.net/wiki/bin/view/Main/DeployingCLMClusterWithWASND and a week with customers in Seoul. It was a bit cold in Seoul, especially coming from a warm though wet Sydney, but I love the food there and managed to keep warm.

Working on the clustering stuff has meant I’ve done very little on the Jazz side and a lot (and I mean a LOT :-) on the WebSphere side. That reminded me of a conversation I had with Alex (http://alexetestelle.almandra.com/) about simplicity. Alex and Estelle had just completed a cycling tour of New Zealand and were visiting us in Sydney on their way back to France. Life on the road was pretty simple for them: not much more than the clothes on their backs, their bicycles and panniers, and an iPhone. Alex is also pretty handy with computers, and after overhearing a long conversation I had about troubleshooting an RTC installation on WAS, he shook his head and said “Ziz ees too complex, no?” (No, he doesn’t really speak like that; that’s my rendition of his Gallic-accented English.) We went on to talk about whether the IT world is guilty of making things unnecessarily complicated, WebSphere being a case in point. I’m all in favor of simplicity, or at least a facade of simplicity, taking a cue from what I see in nature: simple and beautiful patterns everywhere hiding incredibly complex systems which, funnily enough, are usually the result of fairly simple interactions. After the last few weeks I’m craving even more simplicity, though I’ve learnt a ton more about WAS.


Digging out RRC custom attributes with RRDI

I seem to do a lot of “follow-up” posts: something I’m writing triggers something else and things start to multiply. Oh well…

This one is essentially the RRC version of Digging out RTC custom attributes with RRDI (and Reporting on data with custom attributes). As part of the workshop I needed to show a simple example of extracting and using a couple of attributes defined for an RRC requirement type, so I first assume that there are two attributes of type Integer, named Cost and Multiplier, defined for Features in my RRC project.

I’ve modified a couple of Features to add values for these new attributes so I can get some meaningful data in RRDI.

The RRDI report I would like to generate should show these Features in a format similar to that in the graphic above, with an extra column that is the result of multiplying Cost by Multiplier.

Instead of starting with Query Studio as I did last time round, I’ll go straight to Report Studio and start with an empty List report.

The guts of the report lie in the following five queries and joins:

1. Requirements: Extracts the “Requirements ID” and “Name” columns from the Requirements query subject.

2. CostValues: Extracts the “Requirements ID”, “Extension Type Name” and “Value” columns from the Requirement Integer Extension query subject, with a Detail Filter [Extension Type Name]=’Cost’.

3. ReqsWithCost: A join on Requirements ID between the Requirements and CostValues queries, with the Requirements-to-CostValues cardinality set to 0..n (an outer join; see the sketch after this list). The outer join is important: without it the query would only show those requirements that have a value in the Cost attribute.

4. MultiplierValues: Extracts the “Requirements ID”, “Extension Type Name” and “Value” columns from the Requirement Integer Extension query subject, with a Detail Filter [Extension Type Name]=’Multiplier’.

5. RequirementsCostMultipliers: A join on Requirements ID between the ReqsWithCost and MultiplierValues queries, with the ReqsWithCost-to-MultiplierValues cardinality set to 0..n (an outer join). Again, the outer join is important: without it the query would only show those requirements that have a value in the Multiplier attribute.
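
To make the joins concrete, here is roughly what the ReqsWithCost join looks like in the underlying Report Studio report specification XML. This is a simplified, hand-written sketch rather than a specification exported from RRDI, so treat the exact element names and data item names as approximate:

    <query name="ReqsWithCost">
        <source>
            <joinOperation>
                <joinOperands>
                    <!-- 1:1 side: every requirement is kept -->
                    <joinOperand cardinality="1:1">
                        <queryRef refQuery="Requirements"/>
                    </joinOperand>
                    <!-- 0:N side: a requirement may have no Cost row (outer join) -->
                    <joinOperand cardinality="0:N">
                        <queryRef refQuery="CostValues"/>
                    </joinOperand>
                </joinOperands>
                <joinFilter>
                    <filterExpression>[Requirements].[Requirements ID] = [CostValues].[Requirements ID]</filterExpression>
                </joinFilter>
            </joinOperation>
        </source>
        <!-- <selection> with the data items omitted for brevity -->
    </query>

The RequirementsCostMultipliers join has the same shape, with ReqsWithCost on the 1:1 side and MultiplierValues on the 0:N side.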

Once I’ve made sure I’ve added the Value attributes from both the Cost and Multiplier Extension query subjects to the final RequirementsCostMultipliers join, running the query should give a list of all requirements with values for Cost and Multiplier where available. I’ve sneaked in a Detail Filter on the Requirements query ([Requirement Type]=’Feature’) so only Features are shown.

Now that I have the correct data I add a calculated data item to RequirementsCostMultipliers that multiplies the Cost and Multiplier values.
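
As a minimal sketch, the calculated item is just the product of the two Value items. I’m assuming here that the two Value data items were renamed to something distinguishable (say CostValue and MultiplierValue) when they were added to the joins; the names in a real report depend on how you renamed them:

    <dataItem name="Cost x Multiplier">
        <expression>[CostValue] * [MultiplierValue]</expression>
    </dataItem>

Because of the outer joins, either operand can be null for a given requirement, in which case the product is null too – which is exactly the “where available” behaviour described below.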

Running the query shows the result of the calculation where available.

Now all that remains is to clean up the list on the report page to display the required results. I’ve also added a conditional style to the calculated column values to highlight null values and values above and below a threshold in different colours.

This felt a little complicated at first, but the only genuinely tricky bit was working out how to put the queries together.

Digging out RTC custom attributes with RRDI

The canned data model in Rational Reporting for Development Intelligence (RRDI) pulls most custom attributes added to Rational Team Concert, Rational Quality Manager or Rational Requirements Composer projects into the Operational Data Store’s “Extension” query subjects. Here is an example documented in the CLM Infocenter that shows how to extract a custom attribute defined for a Test Case in Rational Quality Manager. While running the CLM 2011 reporting workshop I was asked to show how this can be applied to a custom attribute added to an RTC work item and to one added to an RRC requirement type. As the workshop is also an exploration of the tools RRDI provides report authors – Query Studio and Report Studio – I showed two different ways of getting at these custom attributes, at the same time showing off how to use these two tools in concert (pun intended :-).

One easy way to get to those custom attributes in RTC work items is to start with Query Studio, set up a query that extracts the attributes of interest, and then use the query in Report Studio to present it with bells and whistles if required. To begin, I assume that my RTC “Defect” work item type has a custom string attribute with ID “com.ibm.team.apt.attribute.mystring”.
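
For context, an attribute like this is defined in the project area’s process configuration source. The fragment below is a hand-written sketch from memory rather than an export, so treat the enclosing sections and exact element names as approximate – they vary a little between RTC versions:

    <customAttributes>
        <customAttributesForType id="defect">
            <!-- "smallString" is the plain string attribute type; name is the label shown in the UI -->
            <customAttribute id="com.ibm.team.apt.attribute.mystring"
                             name="My String Attribute"
                             type="smallString"/>
        </customAttributesForType>
    </customAttributes>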

I’ve also created one or two sample Defects with some text in the custom attribute so I have something to show when creating reports:

I now launch the RRDI interface (aka “Cognos Connection”) from the RTC Reports menu on the web:

Then I start Query Studio from the Launch menu. I just want to create a simple list report that shows the values of the Id, Summary, Status and “My String Attribute” attributes. First I insert the “Request ID”, “Name” and “Request Status” attributes of the Request query subject and sort by Request ID in descending order (not required, but sorting this way lets me see my new work items at the top of the list):

Next I navigate down to the “Request String Extension” query subject and insert the “Name” and “Value” attributes:

While we’re there, notice the other query subjects ending in “Extension”: there are subjects for most other custom attribute types, including Integer, Long and Large String. The CCM Data dictionary is a valuable resource that shows how custom attributes (and other attributes) are surfaced in RRDI.

Now that I have a simple list I open it in Report Studio for further editing and formatting.

First I’ll make a cosmetic edit and change the “Name2” column heading to “Custom Attribute Name“.

Now (the important bit) I need to filter out all custom attributes except my custom string attribute. Report Studio makes it a breeze to do this. Open the “Query” query for editing and drag the “Custom Attribute Name” data item to the “Detail Filters” pane to display the “Detail Filter Expression” dialog.

Next I place the cursor to the right of [Custom Attribute Name] and double-click the “=” operator on the Functions tab (or you could just type it in :-).

Let’s say I haven’t the best short-term memory in the world and I’ve forgotten exactly how to spell the ID of my custom attribute. Report Studio allows me to be forgetful: I go to the Data Items tab in the expression editor, select “Custom Attribute Name” and click the “Select Value” icon to have Report Studio show all the available custom attribute names. Double-clicking “com.ibm.team.apt.attribute.mystring” adds it to the filter expression, which now reads “[Custom Attribute Name]=’com.ibm.team.apt.attribute.mystring’”.
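
For the curious, in the saved report specification this ends up as a detail filter on the query, along these lines (again a simplified sketch, not an exported specification):

    <detailFilters>
        <detailFilter>
            <filterExpression>[Custom Attribute Name] = 'com.ibm.team.apt.attribute.mystring'</filterExpression>
        </detailFilter>
    </detailFilters>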

I click OK to add the new expression to the Detail Filter, run the report, and presto: I have a (yes, very simple) report that shows my custom attribute.

Obviously both Query Studio and Report Studio can do way more than what I’ve shown here but the object of this exercise was to show one way of getting to custom attributes in RTC.

Next up: a slightly more complex example showing how to dig custom attributes out of RRC.

Disambiguation: Requirements Management Project template vs Requirements Management Process template

I seem to be stuck on this topic a little, but perusing the Jazz.net forums shows I’m not the only one confused!

To recap from my last post, the relevant bit is copied here to save a mouse click or two (I wonder if that saves any trees?):

“Note that through a quirky inconsistency in project creation between the three CLM applications, you actually need to log in to the Requirements Project URL (/rm/web) to be able to use the predefined RM project templates as described in Creating Requirement projects. Attempting Create Project Area from the RM application’s admin page (as you would with the QM and CCM applications) will only show a “Rational Requirements Server Template”, which is a process template as opposed to the out-of-the-box RM-specific project templates.”

So let me dive a little deeper into Requirements Management (RM) templates and how Rational Requirements Composer (RRC) uses them.

There are actually two types of templates that I can use with RRC:

  1. a process template:
    This is the Jazz process template, wherein are specified things like process roles and permissions. I can administer this using the Web UI by navigating to the Jazz Team Server Home Page and then following the “Manage Project Areas” link in the Requirements Management section of Application Administration.

    RM administration links
    RM Jazz process administration

    I can also get to the same page when already in an RM project (/rm/web) using the Administration drop-down menu icon:

    RM Administration from RM project
  2. a project template:
    This is an RM-specific template mechanism, wherein are specified things like RM artifact templates, RM artifact types, attributes and data types. I can administer this using the Administration menu and selecting Manage Project Properties.
    Manage Project Properties

So to modify an RRC “project template” and create my own modified version of it, I do the following:

a) Create a new RM Project Area based on any of the predefined RM project templates.
b) Customise the RM project as required (create new artifact templates; customise artifact types, attributes and data types; and create new link types).
c) Create a new template from the customised project.

Create New RM Project Template

On the other hand, to modify the *process* template (roles, permissions etc.) and create my own version of it, I do the following:
a) Export the “Rational Requirements Server Template” from the Templates page of the RM Application Administration.

Export RM Process Template

b) Import the exported template into a CCM project area with the Eclipse client.

Import RM Template in Eclipse Client

c) Modify and save the new project area’s Jazz process configuration – roles, permissions etc.
d) Export the modified template to an archive file.

Export RM template in Eclipse Client

e) Import it from the Templates page of the RM Application Administration Web UI.

Import RM Process Template in Web UI

Now come the tricky bits!

Note that when I use the Administration -> Create Project Area action from the /rm/web page, this action *always* uses the “Rational Requirements Server Template” *process* template, but gives me a choice of *project* templates.

Then note that when I use “Create Project Area” from the RM application’s Project Area Administration page (*not* the /rm/web one), this action *always* uses the “Base” *project* template, but gives me a choice of *process* templates (if I’ve used the process above to create my own).

So how can I get to choose *both* process and project templates when creating an RM project?

It appears that the only way to allow a choice of both an RM *process* template AND an RM *project* template is to use a Lifecycle Project Template, which I’ve written about previously. Ignore the Quality Management (and the Change and Configuration Management) part if it isn’t in scope and just use the concepts and procedures described in the other sections.

Happy templating:-)

Creating my own Lifecycle Project template

We were swapping emails over the weekend about Lifecycle Projects (LPs). As I said in my last post, I think Lifecycle Project Administration (LPA) is pretty good and will only get better over time. However, as was pointed out to me, for folks who want to “keep it simple”, understanding and using LPA properly can be a little painful. This is especially true when transitioning from other “single-purpose” tools to something as extensive and far-reaching in its coverage of lifecycle domains as Rational’s Collaborative Lifecycle Management (CLM) Platform is.

You really need the 3 amigos – Change and Configuration Management (CCM), Requirements Management (RM) and Quality Management (QM) – working in concert to realize the true potential of CLM. If, however, you’re coming from a Quality Management perspective (they said) and you are used to a “pure” testing tool, then working out which of the CCM process templates (Scrum, OpenUP or Formal Project Management) and which of the RM project templates (Base, Use Case, Agile Requirements or Traditional Requirements) should be used, and how to put them together, isn’t necessarily the easiest thing to do: you need to understand what roles exist, what permissions they have, what the work item types are and so on.

So I began wondering what a “Quality Management Administrator” (or a CCM or RM administrator, for that matter) would need to do to create a new LP template built to suit his/her organisation, rather than just running with the defaults.

Assuming that each new QM project needs to use the services of all three (QM, CCM, RM) applications, here’s what I’d need to do to build my own LP template:

1. Build my own Quality Management Process template

a) Use the Web client to create a QM project area based on the “Quality Management Default v3 Process”.
b) Modify the new project area’s Jazz process configuration – roles, permissions, work items. I find I tend to use the Web client just for the Presentation aspects of work item customization and the Eclipse client for almost everything else; there are some differences in what each client is capable of (in process customization terms), and the Eclipse client is in general more functional than the Web client.
c) Create a new process template from the customized QM project area. The easiest way to do this is in the Eclipse client: right-click the Project Area and select Extract Process Template. Give it the id freddy.rqm.process.ibm.com.
d) Modify the Test Project properties as required – shared resources, Artifact State Transition Constraints, categories and attributes for several test artifact types, risk profiles and individual risks etc.

2. Build my own Requirements Management template

a) Create a new RM Project Area based on any of the predefined RM project templates. Note that through a quirky inconsistency in project creation between the three CLM applications, you actually need to log in to the Requirements Project URL (/rm/web) to be able to use the predefined RM project templates as described in Creating Requirement projects. Attempting Create Project Area from the RM application’s admin page (as you would with the QM and CCM applications) will only show a “Rational Requirements Server Template”, which is a process template as opposed to the out-of-the-box RM-specific project templates. RM allows a choice of project template but always uses the “Rational Requirements Server” process template – unless projects are created through custom LP templates, as we’re doing here, in which case both the project and process templates can be specified. Confusing, huh? For the purposes of this exercise I chose not to make any Jazz process modifications (roles, permissions etc.).

b) Customise the RM project as required. All of the RM customization is carried out through the web client (there is only one :-). You can create new artifact templates, customise artifact types, attributes and data types, and create new link types.
c) Create a new template from the project, following the procedure in Creating requirements project templates. Name it “Freddys Requirements Project Template“.

3. Build my own Change and Configuration Management Template

a) Create a new CCM Project Area based on the Scrum, OpenUP or Formal Project Management process template.
b) Modify the new project area’s Jazz process configuration – roles, permissions, work items (as I did with QM).
c) Create a new process template from the customized CCM project area. As before, in the Eclipse client: right-click the Project Area and select Extract Process Template. Give it the id freddysformalpm.process.ibm.com.

Ok. So now I have 3 of my very own templates:

freddy.rqm.process.ibm.com
Freddys Requirements Project Template
freddysformalpm.process.ibm.com

4. My very own LP template

The process to get these linked together to give me my very own LP template is as follows (fully detailed at Modifying a predefined template and Importing lifecycle project templates):

1) From the LP Administration -> Templates page (/admin/web/templates) download the “Quality Professional, Analyst, Developer” LP template.
2) Edit the downloaded XML:
- in the Template Description section, replace “rational.alm.integrated.template” with “freddy.alm.integrated.template”
- modify “Quality Professional, Analyst, Developer” to “Freddy’s Quality Professional, Analyst, Developer”
- in the RRC section, replace “Base” with “Freddys Requirements Project Template”
- remove all other RRC project templates
- in the RTC section, replace “scrum2.process.ibm.com” with “freddysformalpm.process.ibm.com”
- remove all other RTC process templates
- in the RQM section, replace “rqm.process.ibm.com.v3” with “freddy.rqm.process.ibm.com”
3) Save the modified file as freddy.alm.integrated.template.xml
4) From the LP Administration -> Templates page (/admin/web/templates), select Import Template and import freddy.alm.integrated.template.xml.
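
To make the shape of those edits concrete, here’s a purely illustrative fragment of the result. I’m not reproducing the real LP template schema here – the element names below are invented – but the ids and names are the ones from the steps above:

    <!-- Illustrative only: element names are made up; ids/names are from the steps above -->
    <template id="freddy.alm.integrated.template"
              name="Freddy's Quality Professional, Analyst, Developer">
        <application type="RM">
            <projectTemplate name="Freddys Requirements Project Template"/>
        </application>
        <application type="CCM">
            <processTemplate id="freddysformalpm.process.ibm.com"/>
        </application>
        <application type="QM">
            <processTemplate id="freddy.rqm.process.ibm.com"/>
        </application>
    </template>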

And there it is: when I create a new Lifecycle Project, I have my very own template which uses my very own customised application templates. There is, however, one last step to complete this. Once a Lifecycle Project has been created using the new LP template, the new QM project must still be configured with any Test Project properties (see step 1.d) that we added or changed in the original QM project. To do this, follow the procedure in Copying project settings to a new project.

So there is a little pain involved in creating custom LP templates. However, the application-level customization effort is independent of LPA and will be required anyway – unless, of course, the out-of-the-box process/project templates can be used as-is.

Exploring Lifecycle Projects

So in my first post I spoke about lots of coffee and working out how to deploy the Rational solution for Collaborative Lifecycle Management. That post focused on RRC, Requirements Management and a little on traceability. There were of course other interests, including development and quality management, represented in those discussions, and before I got to diving into RRC I needed to understand the different roles, teams and projects, the artifacts they produced or consumed, and the relationships between these constructs.

This is important from the point of view of dictating how the different artifact containers, or project areas, should be organised, particularly in relation to each other. Kai-Uwe Maetzel talks about “Lifecycle Projects” (henceforth referred to as LPs) in Countdown to CLM 2Q11: Part 1 – Administering Lifecycle Projects, and the CLM Infocenter has more details on the various aspects of Lifecycle Project Administration. I like pictures, so I went straight to the bottom of Kai’s blog and this image:

Starting out

I figured that what was good for the Jazz development team was good for me too, so I went about setting up a structure that looked similar. That is: a single LP (CLM in the figure above) consisting of (or composed of) three top-level artifact containers (aka Project Areas), one each for Requirements Management, Quality Management and Change & Configuration Management. Here’s what I got:

Starting out: My First Lifecycle Project with three artifact containers

The first thing I noticed was the different relationships (Providers and Users) that were created, based on the life-cycle template I had picked: Quality Professional, Analyst, Developer.

Another initial thought that crossed my mind was that perhaps it was important to give each of the artifact containers names that indicated the application (domain) of origin (QM, RM or CCM). In my case I had two named “Jazz Collaborative ALM”. I put that aside to process later.

Adding members

Now that I had my first LP I wanted to add some members to it, along the way exploring the Role Assignment Rules (RAR). I downloaded the RAR XML and added a rule to say:

“If a user is assigned the Team Member role in the CCM artifact container, then require that the user be assigned the tester role in the QM artifact container and further require that the user be assigned the Commenter role in the RM artifact container”

The XML looks like this:

    <roleRule>
        <!-- a user holding Team Member in the CCM project area... -->
        <sourceRole id="Team Member" context="#rtc.project"/>
        <!-- ...must also hold tester in QM and Commenter in RM -->
        <targetRole id="tester" context="#rqm.project"/>
        <targetRole id="Commenter" context="#rrc.project"/>
    </roleRule>

Once I had uploaded the modified RAR, I added Bob to the LP and, as expected, got the following:

Role Assignment Rules

Following the recommendation I gave Bob the appropriate roles and the errors went away.

Role Assignment Rules : No Error

Adding more projects and members: Episode I

So I had a first LP, and I found it easy to add members and make sure they were given appropriate roles in related artifact containers. I then went back to the picture in Kai’s article, noticed all those other project areas like “SVT_RTC” and “RCPR”, and decided I wanted something similar. So I first created a CCM project area called RCPR (at which point my thought on naming things came back, but I pushed it aside), then went to the Jazz Collaborative ALM project area and added a Provides Related Change Requests association to the RCPR Project Area (artifact container), which shows up as:

Adding Provides association to Project Area

So now that I had another related artifact container, I thought (a momentary lapse of reason?) I could use LPA to give Bob a role in the RCPR project area. However, this isn’t the case, since the new project was created and linked without the CLM LP’s ‘knowledge’, and I can only manage members for the project areas created when the LP was originally created. So the Lifecycle Project Membership page doesn’t change:

Lifecycle Project Membership

Adding more projects and members: Episode II

I then went to the LP’s “main” admin page and added the RCPR as an Additional Artifact Container:

Add additional Container

So I can now “see” the RCPR artifact container in the context of the CLM LP and in the LP Membership page, and use the “Add member to All Artifact Containers” action to add Bob to the RCPR project:

Add Member to All Containers

and set his role:

Add New Member Role

Closing thoughts

Naming the containers (here’s that thought finally :-) with something to indicate the container type (CCM, QM or RM domain) may or may not be so important once the relationship types and their endpoints are understood. I was initially confused by the multiple containers named Jazz Collaborative ALM, until I started to examine the relationships and realized that in a Provides Defects to relationship, the source container is a CCM container while its namesake at the other end (the target) is a QM container. When projects begin to number in the tens or maybe hundreds, I would think it preferable to go with the default convention of appending at least a shortened indicator of the container type (domain) to the name.

If I’ve created an LP and decide to add additional containers as an afterthought, I need to manually create any links between the additional containers and the original containers. The actual warning provided reads: “You are adding an additional artifact container to this life-cycle project. Links will not be automatically established between additional artifact containers and other artifact containers in the life-cycle project. You will need to establish any links that are needed in the artifact container editor.”

Also, if I’ve created an LP and decide to add additional containers as an afterthought, LPA won’t allow me to pick and choose which additional containers I add a (new) member to – it’s all containers or none.

Going through Episode II, I liked the convenience of being able to manage container membership and role assignments in one place. Managing large numbers (tens, hundreds) of containers this way may not have been one of the original design intentions, keeping in mind the all-or-nothing membership caveat mentioned earlier. A secondary side-effect also becomes apparent with a closer examination of the membership page:

Horizontal Explosion

Notice that the new RCPR CCM container is added as an additional column. Ergo, with large numbers of projects there would be a horizontal explosion of containers.

The GUI support isn’t there (yet?) for Role Assignment Rules, and it would be nice to have support for something like RTC’s Team Advisor “Quick Fix” mechanism (“here’s a problem, and a potential solution; would you like me to try to fix it?”).

Putting it all together, and taken in conjunction with other mechanisms such as user self-registration and default license assignment, LPA makes life much easier for anybody setting up and managing CLM.

Bottom line: when initially creating LPs, LPA removes the need for a ton of manual work, including setting up the various links between containers and managing memberships and roles.

Losing my documents

Since the release of CLM 2011, much of my time has been spent working out, in my mind or for customers, how best to go about deploying the Rational solution for Collaborative Lifecycle Management. At one recent engagement, we first spent a few cold, rainy winter days sitting around, drinking (mostly) coffee and listening to how the team did what they did, what worked, what didn’t, and what they would like to improve. We collected lots of notes on various software development practices, roles and artifacts. One key theme that was consistently raised was that while the various artifacts may be produced as part of a given “project” (loosely defined as a number of roles working on a specific set of tasks requiring completion by a certain date), these artifacts would invariably need to be reused in some form in other projects and situations, either as-is or after some modification.

So we drank more coffee and talked about artifacts in more detail. Starting at the figurative top of the software development artifact tree: our business analysts are capturing business needs, features, stakeholder requests and requirements in various shapes, forms and sizes, trying to work out what it is that needs doing. Documents (Word, Symphony, Open Office) are usually the preferred “repository” for these important artifacts, with graphic (Visio perhaps) representations thrown in as needed. Once these documents reach a certain stage of “completeness”, they get stored (maybe on a file share somewhere) and shipped around (via email, on USB keys, in printed form) for review and/or approval.

The Architect and Systems Analyst community then gets involved, taking the documents produced so far and generating architecture, detailed requirements and design elements, working out some of how the “what” will be done. Again, these artifacts are generally produced in the form of documents such as Software Architecture Specifications, Security Architecture Specifications, Web Services Interface Specifications, Detailed Software Specifications and so on. And again, file shares, document “repositories”, email, print and the like are used to share and collaborate on these documents.

Pausing at this point to consider the implications of all the activities and artifacts being produced, it becomes apparent that the threads that tie the different “things” together (features to requests, requirements to design elements) become a crucial part of the puzzle. The numerous documents and diagrams are probably excellent in their own right, but now there needs to be a way to manage the relationships between them. In many cases this requires the introduction of a new entity: the Traceability Table. Whether in a spreadsheet or in a document, this entity in reality has no right to a life of its own, at least in relation to the overall software development process. In other words, the traceability between artifacts, once established, should not need to be managed, maintained and kept up to date as a separate software development task. In some cases we get clever (or so we think) and “hard code” the linking threads in the source or target documents themselves, without a separate traceability table. For example, assuming that we assign unique identifiers to our requirements and the documents they live in, a design specification might have a reference such as “this design element satisfies Requirement X12 as specified in document BRS236-02”.

Great stuff. Well, great stuff until our recalcitrant stakeholder decides that this request here should actually be something else. Or another project over there decides that one of these requirements actually applies to what they’re doing as well. In both cases “someone” needs to wade through a bunch of documents (assuming they can be found in the first place), work out if they are related to the request at all, and then consider what should be done. Should they make a copy of the original document and work with the copy, hoping that someone or something, somewhere, will remember to inform them if the original document changes? But not everything in the original document is even relevant to this other project, so maybe they should just create a completely new document, with a facsimile of the requirement in there?

Leave that aside for a minute.

Along the way somewhere, depending on your favorite process flavour, the Quality (usually called Test) team needs to begin using some of the artifacts being produced to work out how to verify that the development teams (outsourced or otherwise) are producing what the stakeholders expect, in the way the stakeholders expect it to be produced. Now we talk in terms of Test Plans, Test Cases and Test Scripts. We’ve also added at least another dimension to the relationships that can exist.

All pretty familiar territory to me: around 15 years ago I was using ClearCase hyperlinks to try to maintain the relationships between FrameMaker documents, plus some (magical) automation scripts that parsed tables in those documents, in an attempt to avoid having a separate task in the Project Plan assigned to me called “Update traceability”.

That’s why I’d like to lose my documents: while many of us might find whipping up documents full of requirements or design elements or stakeholder requests the easiest thing in the world to do, it causes all sorts of headaches. What we’d like to be doing is treating requirements, use cases, stakeholder requests and design elements as different types of “artifacts”, each type having its own characteristics (or attributes).

Which brings me to what got me going on this topic in the first place: Rational Requirements Composer (RRC). Or more precisely, RRC version 3.0.1. As I began creating some of the RRC project framework (artifact types, attributes, link types etc.) necessary to get the team started, I found myself wishing it had been this easy, flexible and intuitive 15 years ago. It’s true that the technologies used to implement some of the features were not around then, but there are some fundamental things in the way RRC is constructed to support requirements capture and management that I liked.

Like removing the unhealthy dependency on ‘documents’. If we want to keep a document as-is, well, we can: we just upload it as a “blob”, and it’s just “there” in the RRC repository as a supporting artifact that can be linked to other artifacts, reviewed as a whole, commented on and so on. This is good for documents that, for example, are produced and maintained by some external organisation, and that we can’t or don’t want to break into smaller chunks.

On the other hand, if we would like to allow members of our team to collaborate on, review and modify the individual requirement artifacts within the document, then we can use the Import facility to convert the content into RRC’s rich text format and begin to do really useful things to the content in “chunks” that make sense from a requirements management point of view. Taking chunks and making them artifacts of different types is very, very simple to do: simply put the artifact (remember we don’t have a traditional “document” anymore) in Edit mode, highlight the “chunk” and save the selection as a new artifact, choosing whether to keep the entire selection embedded as-is in context or saved as a link. Of course I can also add other rich text content, which I can then convert to embedded or linked requirements artifacts.

One of the key benefits of a CLM solution should be traceability: traceability to other requirements artifacts, and traceability to downstream (or “side stream” if you prefer something like a V-model) artifacts like design, development and test artifacts. So if we have all these artifacts floating around in our requirements universe, we can start to easily create links to other artifacts, even enable automatic link creation and management, making requirements-driven development across the lifecycle easy. Once we have such links in place, the answers to some of those questions that could only be found in that extra spreadsheet or table I mentioned before begin to stare us in the face with no additional effort: Have all requirements been implemented? Are there any defects affecting requirements that need attention? What is the level of test coverage? What is the potential impact of changing a requirement? (This is a nifty add-on that lets me visualise the various links.)

It also becomes natural to go from coarse-grained and unnatural requirements reuse (documents) to very fine-grained reuse of requirements artifacts, even across different organisational or functional boundaries. One group of artifacts that is useful to maintain is a “glossary” – definitions of commonly used terms. With RRC I can easily select a word or a phrase and Create a new Term from it; then whenever I use the term it gets replaced with a hyperlink, and hovering over it shows the definition of the term. Kind of like the “Translate” button on the Google toolbar, which I used to turn off in annoyance but find very useful when, for whatever reason, I want to see what a word on a web page translates to in Japanese or Hebrew or French. Seems a simple enough and innocuous little feature. Now extend it, as RRC does, to allow “terms” to be of any of the other artifact types, and this becomes the basis for an organisation-wide “data dictionary”. For example, if an “author” is an important, oft-used actor, I can create an Actor called “Author” (that may include various actor-related attribute values and a rich text definition), create a term from it, and wherever it is important that the meaning of “author” be made clear, I can reference the term. This removes the potential for ambiguity, and therefore misunderstanding, that is inherent in most languages. (Apparently the word “set” has the most definitions in the English language – 464.)

It doesn’t take a huge stretch of the imagination to extend the “glossary term” reusability model to other requirements artifacts: define a Requirements Management artifact container at the organisational level and populate it with a whole bunch of common artifacts and (subtly different) commonly used artifacts. Then over time, as the need arises (new projects starting up, new releases of existing projects etc.), we don’t need to hire a PI to go out and find those artifacts we need and, equally important, the relationships they have and how, when and why they changed. All we need is perhaps a vague memory of a word or two that might have been part of the artifact, and the excellent little search field in RRC becomes our best friend forever. As we go about creating new artifacts, those that have not changed but need to be repeated can simply be inserted or embedded, without much thought to where they actually live. From a reuse point of view, a nice additional extension would be the ability to copy or move requirements artifacts across project areas, for those cases where only a small part of the artifact is in fact reusable (copy) or where we would like the artifact to be “owned” by a specific project area rather than the common one (move).

One other problem we found during the coffee-drinking sessions mentioned previously is getting the different stakeholders to contribute and collaborate on requirements artifacts, easily and in the context of the affected artifacts. Printing stuff out for review meetings, then collecting meeting minutes in (more!) documents or email exchanges is often the norm here, and again one flaw is that there is a disconnect between these media and the things they relate to. (I won’t get into that other flaw: printing kills trees.) RRC allows us to create formal or informal reviews on all types of artifacts, or even meaningful collections of artifacts. Adding participants to these reviews sends out notifications (on dashboards or by email), which they then respond to and have their say. The good thing about this is that all this important detail about how the requirement got to where it’s at is captured in the RRC repository for analysis and use.

So I’ve squeezed in a few things that I think are useful and easy in RRC, though there’s a heap of other stuff that goes a long way towards making the Requirements Elicitation and Management process simpler. I’d toyed very, very briefly with RRC prior to the release of version 3.0.1, but shied away from diving deeper, mostly because of the need to install a separate client for it. As I’m sure many of us do, I find using a web browser a very “natural” thing to do these days, and so once I had the RRC v3.0.1 browser add-ons installed, it didn’t take me long to get stuck into it.