Thursday, September 19, 2013

Helping Maintainers

There are two broad categories of documentation which exist and are or can be maintained outside the code. 

These can broadly be defined as User Help/Specifications and Design/Architectural Descriptions. This blog posting is about Design/Architectural Descriptions; a previous posting covered User Help/Specifications.


This type of documentation is intended for use by maintainers and, to a lesser extent, by project stakeholders.  From a maintainer's perspective, this type of documentation is useful only if it leads to a quicker (than just reading the code) understanding of what the major software and hardware elements are and how they relate to one another.


The characteristics of software architecture documents are:

    1. Highest level of abstraction
    2. Focuses on the components needed to meet requirements
    3. Describes the framework/environment for design elements
    4. No implementation information

The characteristics of software design documents are:

    1. Lowest level of abstraction
    2. Focuses on the functionality needed to meet requirements
    3. May only hint about implementation


As appropriate, it should have diagrams and/or descriptions of the major modules and how they relate to each other, both structurally and at run time (data and/or control flow relationships).
       
For large and structurally complex applications written in an object-oriented language, there are tools which will generate UML class diagrams of the modules.


For maintenance purposes on large/complex systems I have found these diagrams to be useful only as a starting point for checking references to a method as part of a proposed defect fix.  I say starting point because it mainly serves as a way to estimate potential effort.  More times than I care to think about, I have written a Perl script to scan all files in a project for a given symbol or set of symbols.  The Linux grep command can be used to do much the same thing.  Both of these scans can also check external documentation files. The virtue of the Perl script is that it can be easily modified to make the desired change to the code and comments.
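
For flavor, here is a minimal sketch in C of such a project scan (my actual scripts were in Perl; the structure and names below are illustrative, not the original code):

    #include <stdio.h>
    #include <string.h>
    #include <dirent.h>
    #include <sys/stat.h>

    /* Print every line in 'path' that contains 'symbol'. */
    static void scan_file(const char *path, const char *symbol)
    {
        char line[4096];
        FILE *f = fopen(path, "r");
        if (!f)
            return;
        for (int n = 1; fgets(line, sizeof line, f); n++)
            if (strstr(line, symbol))
                printf("%s:%d: %s", path, n, line);
        fclose(f);
    }

    /* Recursively scan all regular files under 'dir'. */
    static void scan_dir(const char *dir, const char *symbol)
    {
        DIR *d = opendir(dir);
        struct dirent *e;
        if (!d)
            return;
        while ((e = readdir(d)) != NULL) {
            char path[1024];
            struct stat st;
            if (e->d_name[0] == '.')
                continue;       /* skip hidden entries, "." and ".." */
            snprintf(path, sizeof path, "%s/%s", dir, e->d_name);
            if (stat(path, &st) != 0)
                continue;
            if (S_ISDIR(st.st_mode))
                scan_dir(path, symbol);
            else if (S_ISREG(st.st_mode))
                scan_file(path, symbol);
        }
        closedir(d);
    }

    int main(int argc, char **argv)
    {
        if (argc != 3) {
            fprintf(stderr, "usage: %s <dir> <symbol>\n", argv[0]);
            return 1;
        }
        scan_dir(argv[1], argv[2]);
        return 0;
    }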

I am not saying that auto-generated UML diagrams are useless, only that they should not be the only external documentation for a project intended for maintainers.  Consider who the maintainers might be.  Not all maintainers may understand UML diagrams.  It is possible that some maintainers will have expert domain-specific knowledge but not much software expertise.

Consider management type people .....



Friday, September 13, 2013

Maintainable Software, Adventure 2

This adventure in maintainable software is included because some very stringent restrictions, plus a couple of significant requirements which came in late in the project, led to a highly maintainable architecture/design.  This was a firmware project for what was called a BootROM in an embedded system.  One aspect of this type of application is that, unless you count the system code which is loaded, there is no application data beyond some configuration data.

The restrictions and requirements were roughly as follows:
o   The system bus was VME, which is/was the standard for large embedded systems
o   The RAM available for firmware use was very limited.  There was enough for roughly 3 stack frames as generated by the C compiler, which limited the calling depth
o   There was no practical limit on ROM address space
o   There was a good amount of flash memory for configuration data
o   It required using three languages (assembler, C and Forth)
o   It must support simultaneously an arbitrary number (0 to n) of Human Interface devices (keyboards, displays, and RS-232 ports).  Late in the project, remote console (LAN) interfaces were added.  All of these interfaces were to remain active even while the OS/user system was being loaded
o   It must be capable of finding an arbitrary number of bootable systems on each of an arbitrary number of bootable devices (disks and remote (LAN) servers)
o   It must be capable of accepting input from keyboards configured for any of dozens of human languages and displaying messages in those languages.
o   It must be capable of running diagnostics for any user supplied device which appeared on the VME bus.
o   Late in the project a requirement was given to support what was known as hard real time.  What this mostly meant was that the system as a whole must be capable of going from power on to user code running under Linux in under 30 seconds. What this required of the firmware was to ignore HI devices and to disable all diagnostic code except for devices directly involved in the boot process
o   Oh! And if possible allow users to modify the code

The requirement for Forth came about because at the time there was a push for a way to allow device interfaces to contain their own driver code.  This was to be the way in which interface testing and boot support would be supplied, so that firmware did not have to be constantly updated for new or changed devices.  A group of manufacturers and suppliers, including Hewlett-Packard and Apple, had come to an agreement that Forth would be the language used to implement this capability.  Also around this time an IEEE standard for Forth was accepted.

The problem of supporting keyboards for any language was solved by using a pure menu based human interface system where menu items were selected by keying a number.  This worked because all keyboards have the same key position for the numbers (excluding number pads).

A breakthrough solution for the stack depth problem came when I realized that if code was broken up into small functions, a state machine could be used to represent calling sequences, and the stack depth could then be limited to a single level.  A refinement of this design was to limit the number of function arguments to three, because the C compiler always put the first three arguments into registers rather than on the calling stack, thus further reducing the stack frame size.  The state machine description could be stored in ROM.

The rough process used to create such a representation was to translate each subroutine into a state machine representation by refactoring each straight-line code segment (code between routine calls) into a function and replacing each subroutine/function call with the state machine representation of that routine.  This was rather laborious, but fortunately most routines were small and straight-line.
The following is a simple abstract example of this flattening and state machine representation:

‘Normal version’ (for simplicity it is assumed that only sub2 calls another routine):
subroutine A(arg1)
    code seg1
    v1 = func1(arg1)
    if (v1=1) call sub2(arg1) else call sub3(v1)
    code seg2
    call sub4(v2)
end sub A
subroutine sub2(argx)
     code seg3
     call sub5
end sub2

‘State machine version’
Code segments 1, 2, 3 etc. are refactored as functions.  Most often the return value is a 0 or 1 indicating success or failure.
Rv is the return value, while Arg1, V1 and V2 are ‘global’ variables
StateA1
     Rv=seg1();  stateA2
StateA2
     Rv=func1(Arg1);  1,stateS2.0; 0,stateA3
StateA3
     Rv=Sub3(V1); 1,stateA4
StateA4
     Rv=seg2(); 1,stateA5
StateA5
     Rv=Sub4(V2); 0
The 0 here indicates completion for the machine
StateS2.0 // this implements sub2
     Rv=seg3(); stateS2.1
StateS2.1
     Rv=Sub5(); stateA4 // this is the return from sub2

Another innovation was in the state-machine representation and its ‘interpreter’.  In conventional state-machines, the ‘interpreter’ operates by calling a routine which returns a value (often a character from an input source); this value, along with the number of the current state, is used to index into a two-dimensional array to fetch the number of the next state, and that number is also used to select and call a routine which is the action associated with the state.  This process is repeated until some final state is reached.

The main problem with this design is that as the number of states increases, the number of empty or ‘can’t get there’ entries in the state table increases, along with the difficulty of understanding what the machine is doing.  Documentation can help but must be changed every time the state machine is changed.  The solution I came up with was to represent the state-machine as a table of constants in assembler language and use addresses instead of index numbers.  Each state has a name and contains the address of the function to call, its arguments and a list of value/next-state pairs.  The following examples and explanations may be a bit tedious for some people, but they form the basis for my solution to the multiple HI devices problem as well as solving the stack size problem.

In this example, dc stands for Define Constant and the -* causes a self-relative address to be produced; this will be explained later.
stateX1     dc ‘op1 ‘            //arg1
            dc 1                 //arg2
            dc 0                 //arg3  not used
            dc myfunc-*
            dc value1,stateX1-*  // repeat stateX1
            dc value2,stateX2-*
            dc 0,0               // end of list

Calling a routine for an input value was eliminated; the ‘interpreter’ ran by using the current state address to select the function to call, then used the return value from that call to select the next state from a list tailored to that function's return values.  This minimizes the amount of space needed to represent the state machine and makes it somewhat easier to follow a sequence of transitions.
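
To make this concrete, the following is a minimal C sketch of such a table and its interpreter, compressing the earlier example down to two states.  The original tables were hand-written assembler with self-relative addresses; the struct layout and names here are my illustrative assumptions:

    #include <stdio.h>

    struct state;
    struct transition {
        int rv;                    /* return value to match        */
        const struct state *next;  /* state to enter on that value */
    };
    struct state {
        int (*action)(void);       /* function this state calls           */
        struct transition t[3];    /* list ends with next == NULL (the 0,0
                                      end-of-list marker in the dc table) */
    };

    static int seg1(void) { puts("seg1"); return 1; }
    static int seg2(void) { puts("seg2"); return 1; }

    /* Define in reverse order so each state can name its successor. */
    static const struct state stateA2 = { seg2, { {0, NULL} } };
    static const struct state stateA1 = { seg1, { {1, &stateA2}, {0, NULL} } };

    /* The interpreter: call the current state's function, then use its
       return value to pick the next state from that state's own list.
       No match means the machine is finished.                          */
    static void run(const struct state *s)
    {
        while (s) {
            int rv = s->action();
            const struct state *next = NULL;
            for (int i = 0; s->t[i].next; i++)
                if (s->t[i].rv == rv) { next = s->t[i].next; break; }
            s = next;
        }
    }

    int main(void) { run(&stateA1); return 0; }

Because the whole table is const data it can live entirely in ROM, and the call depth never exceeds the single action() frame, which is how the tiny stack budget was met.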

The requirement to support simultaneous use of multiple HI devices, even while the OS/system is being loaded, was a significant challenge because the lack of available RAM, along with other issues, prevented using an interrupt-driven mechanism the way operating systems do.  Firmware had to use a polling mechanism, which normally means the user ‘loses control’ while the OS is being loaded as well as at other times.  The requirement to support a LAN-based console was over the top because driver code in general had never been implemented, or even designed, to be used for two different purposes at the same time.

The multiple simultaneous HI device ‘problem’ was solved primarily by changing the state machine to support a cooperative multi-threading mechanism, then creating a thread for each input device and a separate thread for the selected boot device.  Each input device thread was a two-state state-machine which called the driver requesting a character.  If none was found, the next state was the same as the current state; when one was read, the next state called a function which invoked the menu system, then transitioned back to the first state.

This worked because the state-machine interpreter was modified to work with a list of state machine records.  I do not remember the details of how it was implemented, but even now I can think of at least three ways it could have been done.  The essence is that the interpreter would pop the next state address off a list and execute the state selected.  After executing the state, the new next-state address was put on the end of the list and the cycle would be repeated.  The effect of this was to switch threads on state boundaries and therefore after each function call.  The LAN driver was converted to state-machine format and refactored to have both boot protocol and console protocol components, thus allowing the console and boot operations to work simultaneously.
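
I can only guess at the original details, but here is a minimal sketch of one such mechanism, reusing the struct state records from the sketch above.  Each thread is just a pointer to the state it will run next, serviced round-robin, so a thread switch happens at every state boundary:

    #define MAX_THREADS 8

    /* One entry per thread: the state that thread will run next.  A
       polling input-device thread simply names itself as its own next
       state until a character arrives.                                */
    static const struct state *queue[MAX_THREADS];
    static int nthreads;

    static void add_thread(const struct state *first)
    {
        queue[nthreads++] = first;
    }

    /* Round-robin scheduler: run one state per thread per pass.  A
       thread whose transition list yields no next state is finished
       and is dropped from the queue.                                  */
    static void scheduler(void)
    {
        while (nthreads > 0)
            for (int i = 0; i < nthreads; ) {
                const struct state *s = queue[i];
                int rv = s->action();
                const struct state *next = NULL;
                for (int j = 0; s->t[j].next; j++)
                    if (s->t[j].rv == rv) { next = s->t[j].next; break; }
                if (next)
                    queue[i++] = next;            /* switch to next thread */
                else
                    queue[i] = queue[--nthreads]; /* thread done; drop it  */
            }
    }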

At the time of this design, there was great interest in using ‘building blocks’ as a way to construct easy-to-build and easy-to-maintain systems.  No one had as yet actually tried it or even come up with a concrete proposal.  Because we anticipated a huge maintenance workload after the first system release, I decided to try to come up with something for this project.

The resulting architecture initially consisted of a ‘core’ block, an English language block, a set of driver blocks and an end block.

[Diagram: the Core block, language and driver blocks, and the End block concatenated in ROM, with linked lists running through the blocks to the End block's null pointers]
The Core block contained normal BootROM stuff, including basic initialization, the Forth interpreter and various general utility routines which could be used by all blocks.  The End block was nothing more than a set of null pointers to terminate all the lists.  There were in fact several more linked lists than are shown in the above diagram.  All of these links were self-relative pointers and were paired with a bit pattern which indicated whether there was anything of interest at that address, for example text for English or code for a human input device.  Each block relied on the assumption that there would be a set of pointers immediately following its own code.  Each block was compiled/assembled independently of the others.  The ‘ROM’ was constructed by extracting the code from the compiled files and literally concatenating them between the Core block and the End block.  This architecture required code, and addresses such as the ones in the state-machine tables and the lists running through the blocks, to be self-relocating.  The compiler generated self-relocating code.  The self-relocating requirement for addresses was met by making them self-relative (add the value stored at an address to that address to produce the needed address).  A null pointer was detected by checking for zero before the addition step.
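
The address arithmetic is simple enough to show in a few lines of C (illustrative only; the real links were laid down by the assembler, as in the dc example earlier):

    #include <stdint.h>
    #include <stddef.h>

    /* A self-relative link: 0 means null; any other value is the offset
       from the link's own address to its target.  Because only offsets
       are stored, a block can be concatenated into the ROM at any
       address and its links still resolve without a relocation pass.   */
    static void *resolve(int32_t *link)
    {
        return *link ? (char *)link + *link : NULL;
    }

A zero link also doubles as the terminator supplied by the End block.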

The ‘multiple human languages’ requirement was met partly by the HI input device mechanism described above, partly by the pure menu system, and partly by the following mechanism.

All of the text for all human-visible messages was placed in a table and referred to by number.  This text consisted of phrases and single words which could be combined as needed.  Messages as stored in the table could be either text or a string of message numbers.  When a message needed to be displayed, a call would be made to a print routine in the Core block, giving it the message number.  This routine would find the language table for the currently configured language, construct the text version of the message, then find all the console output devices and have them display the message.
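
In outline, the mechanism might look like the following C sketch.  The marker byte, message numbering and names are invented for the illustration:

    #include <stdio.h>

    /* Each language block carries a table like this.  An entry is either
       literal text or, when it begins with the marker byte 1, a string of
       message numbers naming fragments to concatenate.                   */
    static const char *english[] = {
        /* msg 1 */ "disk",
        /* msg 2 */ "not found",
        /* msg 3 */ "\001\001\002",   /* composite: msg 1 + msg 2 */
    };

    static void print_msg(const char *table[], unsigned n)
    {
        const char *m = table[n - 1];
        if (m[0] != '\001') {             /* plain text entry */
            printf("%s\n", m);
            return;
        }
        /* composite: each following byte is a message number */
        for (const char *p = m + 1; *p; p++)
            printf(p[1] ? "%s " : "%s\n", table[(unsigned char)*p - 1]);
    }

    int main(void)
    {
        print_msg(english, 3);   /* prints "disk not found" */
        return 0;
    }

The real routine additionally selected the table for the currently configured language and fanned the result out to every console output device.
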
These messages and message fragments were developed by Human Factors specialists.  I developed a simple emulator (in Perl) which would behave like the ROM in terms of how the menu system worked and how it would react to error conditions.  These specialists then had complete control over the message text.  This allowed parallel development of code and user manuals.  When the time came to deliver the ROM, I used a Perl script to extract the messages and generate the English language block.  The same or a very similar process was used to generate the French language block.

The last requirement (“Oh! And if possible allow users to modify the code”) was met by creating a developer's kit which consisted of a lot of documentation, the ready-to-concatenate files used in the ‘standard’ ROM and the scripts used to create them.
Historical notes:
o   Shortly after the first release of this ROM, Apple came out with the first device which was to support the Forth language concept.  Instead of Forth, their ROM contained machine code.  Their comment was ‘we never intended to support Forth’.  That killed the initiative.
o   The system this ROM was part of was designed to work in hostile/demanding environments primarily for data collection/management purposes.  As I recall, it was used in monitoring tracks for the Japanese bullet trains, doing something in the latest French tanks, monitoring and controlling sewage treatment plants, collecting and displaying data for an Air Traffic control system and collecting and displaying data in the US Navy Aegis weapon systems.
o   I did train engineers for one company to allow them to build their own ROMs.
o   About a year after releasing this project, the division was disbanded and maintenance was transferred to another division.  About five years after that, I talked with an engineer who had been assigned to maintain the ROM.  He said that he had been maintaining it by himself for a few years, had released new modules, and had easily made several releases per year.

Summary of things done and learned:
o   Used Assembler, Forth and C in the same application.
o   Designed and implemented what may be the only true ‘building block’ system.
o   Learned that separating control logic from computational logic helps maintainability.
o   State-machines are a viable way to keep control and computational logic separate and make the control logic very visible.
o   Simple emulators can be used to see what an application will look and behave like before and while the application is being built.  They can be cost effective because they allow scenario testing before the code becomes too rigid.
o   Learned a lot about how multi-threaded systems work in a very controlled environment
o   The building block architecture proved to be a very good way to improve maintainability



Monday, September 2, 2013

Maintainable Software, Adventure 1

This is the first in a series of at least three projects which most strongly influenced my views of software maintenance.  I have titled them Adventures in Software Maintenance.

Denison Mines: open pit mining, washing plant, etc.

The adventure here is that until about halfway through the project (several months in) I did not have anything close to a full set of requirements.  The requirements were given one small part at a time, with coding and testing done before the next requirement was given.

While working for Computer Sciences Canada in the late 1970s, I was given the assignment to be a consultant and contract programmer for a company then known as Denison Mines. At the first meeting the man who was to be my manager at Denison Mines told me that he knew very little about computers but was confident that they could solve his problems.

The first assignment was, in essence, to create a ‘graph’ of a curve with two inflection points and to do this for several hundred data sets which had only two data points.  I, of course, having studied such things in college, told him it could not be done.  He said we were going to do it anyway because each pair of data points cost several thousand dollars to produce.  We did it in a few months by using a few complete data sets, some other empirical data and the help of a Japanese mining engineer they brought in from Japan.

Over the course of several months the manager kept adding other things which needed to be done and added to the growing program, such as a mathematical model of a coal washing plant, a mathematical model of the geology of a few mountains and algorithms for designing open pit mines.

After a few visits it became obvious that this incremental requirement discovery process was going to continue and that it was going to be impractical to keep rewriting the program.  The solution was to design a simple command reader which would read a command (from a file) and then make the appropriate subroutine calls.  There was too much data to pass into the subroutines, so the application data was read into global structures and the routines manipulated the data in those structures.
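
The original was FORTRAN; the following C sketch shows the same dispatch pattern with invented command names and handlers:

    #include <stdio.h>
    #include <string.h>

    /* Application data lives in global structures (originally FORTRAN
       common blocks), not in argument lists. */
    static double seam[1000][3];

    static void load_data(void) { seam[0][0] = 0; /* stand-in: read and range-check input into seam[] */ }
    static void make_map(void)  { /* stand-in: produce a map from seam[] */ }

    static const struct { const char *name; void (*run)(void); } commands[] = {
        { "load", load_data },
        { "map",  make_map  },
    };

    /* Top-level flow control lives in the command file, not in the code:
       read one command at a time and dispatch to the matching subroutine. */
    int main(void)
    {
        char cmd[64];
        while (scanf("%63s", cmd) == 1) {
            size_t i;
            for (i = 0; i < sizeof commands / sizeof commands[0]; i++)
                if (strcmp(cmd, commands[i].name) == 0) { commands[i].run(); break; }
            if (i == sizeof commands / sizeof commands[0])
                fprintf(stderr, "unknown command: %s\n", cmd);
        }
        return 0;
    }

Adding a new command touches only the new subroutine and one table entry, which is exactly the locality described below.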

Looking back on this project from a maintainability perspective, the most volatile program area was the command interpreter, where new commands were added.  This worked because the highest level flow control was shifted outside of the program and into the command file (there were no conditional or looping statements in this command language).  The second most volatile area was the data structures; changes were made to array sizes, or to add new arrays and entries in the common blocks, or new common blocks.  In more modern languages the changes to the common blocks would be to add fields in a record or data-only module.  The least volatile area was the computational subroutines, or algorithmic code as I now call it; the changes most often were to add routines to execute new commands.

One side effect of the command stream architecture was that depth of subroutine/function calls was very shallow, seldom going more than a couple of levels.  This in turn made it easier to test new code.

The result of this design/architecture was that very little maintenance work was done or needed, even after several years.  A historical footnote is that this program was used to design two major open pit mines and the associated coal washing plants.  For more information see http://en.wikipedia.org/wiki/Tumbler_Ridge,_British_Columbia and this http://www.windsorstar.com/cms/binary/6531189.jpg?size=640x420 and this http://www.ridgesentinel.com/wp-content/uploads/2012/04/WEB-04-April-27-Quintette-and-PH.jpg

Summary of things done and learned

o   Language: FORTRAN using block common (data only single instance records)
o   The value of validating (e.g. range checking) application data as it is read (see the sketch after this list)
§  This allowed the algorithmic code to be as simple as possible by focusing exclusively on processing the data
o   The value of using a user input stream for top level control
§  This reduced the effort required to test new code by allowing execution of new subroutines to be isolated from existing code
§  This enabled changes caused by the introduction of new functionality to be very localized within the program. Most changes were limited to the new subroutines and a simple addition to the command interpreter.
o   The value of separating control and computational (algorithmic) code
§  This was mostly a side effect of the command stream architecture
o   The value of storing application data in global space
§  Because most of the data was stored in arrays, this allowed a kind of ‘pipeline’ processing and all but eliminated passing data up and down the calling stack.  The trade-off is that more than ordinary attention must be paid to how data is organized, and subroutines only work on very specific data elements.
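
As a sketch of the validation and global-data points in the list above (C rather than the original FORTRAN; names invented): all range checking happens once, as the data is read into global arrays, so the algorithmic code can assume clean data.

    #include <stdio.h>

    /* Global application data, the moral equivalent of block common. */
    #define MAX_HOLES 500
    static double depth[MAX_HOLES];
    static int nholes;

    /* All validation happens here, as the data is read, so the
       algorithmic code downstream never has to recheck it. */
    static int read_depths(FILE *f)
    {
        double d;
        while (nholes < MAX_HOLES && fscanf(f, "%lf", &d) == 1) {
            if (d < 0.0 || d > 10000.0) {
                fprintf(stderr, "bad depth %g at entry %d\n", d, nholes);
                return -1;
            }
            depth[nholes++] = d;
        }
        return 0;
    }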


The application data naturally occurred in one- and two-dimensional arrays.  The raw geological data was presented as a set of vectors for each drill hole: a location followed by a vector of numbers representing what was found at varying depths.  This data could then be filtered and processed to produce ‘topology’ maps, or simplified arrays representing seams of coal with specific qualities, which could be written to a file for later processing and/or for producing various reports.  In all cases the original input data was never altered.

Saturday, August 24, 2013

Helping Users

There are two broad categories of documentation which exist and are or can be maintained outside the code. 

These can broadly be defined as User Help/Specifications and Design/Architectural Descriptions.  This blog posting is about User Help/Specifications.  Originally the two were intended to be in the same posting, but because of its size and some work I want to do for the Design/Architectural posting, they have been split up.

This type of documentation is primarily intended to be read and used by end users.  It most commonly takes the form of ‘Help’ information presented at run time.  The 'Specification' part most often takes the form of manuals, but that is not a topic for this blog.  The distinction between run time help and manuals is sometimes blurred.  A simple example of this type of blurring is when a description of the format of an input/output file is needed.

The first question I ask is "does the user need to know this before the program starts?"  If the answer is yes, then a manual is needed.  The second question is "does the information take more than half a screen height to display?"  If the answer is yes, then a manual may be needed, or a more complex display system such as a web page is needed.

This topic is part of maintainable software because it is often part of a maintainer's job to keep it accurate, and it also serves as a reminder of what the application software is supposed to be doing and not doing.  The run time 'Help' information is typically the most volatile aspect of maintaining software.  But if implemented with some forethought, it can be maintained without requiring code changes, thus avoiding the overhead of a formal release.

The display of help information is usually triggered via a command line option which causes the program to display the information or switches control to a web page which presents the information.  In cases where there is no such command line option, the program may display a line with the program name, its version id (number) and possibly a statement of what the program is supposed to do.

The choices of where to put this type of information are simple: build it into the software or put it in files outside the program.  Building it into the software works quite well for script languages such as Perl, but not so much for compiled languages such as C or C++.  If the information must be available for web or manual publication, then the only practical choice is to put it into file(s).

One of the most successful methods I have used to minimize the maintenance effort is to put the information in topic-specific files, then use Doxygen to format and organize the files for program and web use.  This means that there is only one copy of the text to maintain and that the code is not affected by changes to the content.  This is an example of the WORM (Write Once Read Many) principle, which basically says that a piece of information should only be stored in one place no matter how many references to it exist.

The above pattern can be implemented by a simple encapsulation function which allows the calling code to request the display of a specific topic (string); the only thing the calling code may need to know is whether the display was done properly or not (return value), e.g. the information was available and displayed.

Maintainability can be enhanced by having the encapsulating routine translate the topic name (as supplied by a user) into a file name by simply adding a suffix and/or perhaps prefixing the program name.  This mechanism allows the addition of help topics by changing only the help files.  This assumes that the command line description is also contained in a file.  A variation on this is to place the topic name in a URL which is then sent to a browser for display and handling.
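
Here is a minimal sketch in C of such an encapsulation (the file naming convention and names are made up for illustration):

    #include <stdio.h>
    #include <string.h>

    /* Display the help file for 'topic'.  The topic-to-file mapping is
       just <prog>_<topic>.txt, so new topics are added by adding files,
       with no code change.  Returns 0 on success, -1 if no such topic. */
    static int show_help(const char *prog, const char *topic)
    {
        char path[256], line[1024];
        FILE *f;

        snprintf(path, sizeof path, "%s_%s.txt", prog, topic);
        if ((f = fopen(path, "r")) == NULL)
            return -1;
        while (fgets(line, sizeof line, f))
            fputs(line, stdout);
        fclose(f);
        return 0;
    }

    int main(int argc, char **argv)
    {
        if (argc > 2 && strcmp(argv[1], "help") == 0 &&
            show_help("myprog", argv[2]) != 0)
            fprintf(stderr, "no help available for '%s'\n", argv[2]);
        return 0;
    }

A new topic is then just a new file; the code is unchanged.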

A related aspect of maintaining information a user might want to know is maintaining a program's version id/number.  At one point in my career I designed and implemented a Subversion-based system which, among other things, automated the checkout, testing and release of numerous programs.  Part of this was a mechanism to automate the update of version ids for these programs (independent of the implementing language of the program) and support for all types of external documentation.  The premise was that if it is easy to do the right thing (including maintaining documentation), then maintainers, even engineers in a hurry, would do it.  This has proved to be true, even years later.

In my opinion, one of the best examples of a good command-based help system coupled with a manual is the one implemented by Subversion: the red book (http://svnbook.red-bean.com/en/1.7/svn-book.pdf) together with the svn command.  I have used this as a model for several applications.  I have also designed/implemented applications which used web pages to present the help information.  All of these applications were designed with the goal of not having to make code changes just to update the help information.

Wednesday, August 21, 2013

Comments on comments

I have been around long enough to realize that there are as many sets of commenting guidelines as there are organizations which produce software, and that the rules will change over time.  Here are my personal thoughts and opinions on this subject, mostly from a maintainer's perspective.

The most important thing is to give information which will quickly let a maintainer know what the code/method/subroutine/module is supposed to do.

The simplest type of documentation is to use symbol names (variables, subroutines, modules, etc.) which convey an idea of what the item is used for.  I once thought this was not particularly important, until I was asked to make a few changes to a program where the comments and code were written by a French programmer.  I also once had a very inventive colleague who chose names and used comments which made the code read something like a novel.  The code was very memorable but very difficult to debug or make rational changes to.

An exception to the above rule is the use of variables named I, J, K etc. as counter or index variables which are used only as a counter or index into a data structure and have no meaning at all outside the loop.

It is not a good thing to describe in great detail what the code is doing.  This is typically not very helpful, and when the code is changed for any reason the comments will need to be updated as well, thus doubling the required effort (it might be better to just remove the offending comments).  The worst thing would be to update the code but not the comments, because then a poor maintainer will not know whether the comments or the code are correct.

The simplest type of comment is appended to a line of code.  This at least has the virtue of being removed if the code line is removed.

One of the most useless comments looks like the following:
     A = B;  // assign B to A
A better comment would be to describe why the assignment is needed e.g.
    // Preserve parameter B for later restoration

If for some reason you feel compelled to place a comment for every line of code, please do it at the end of the line so that a maintainer does not have to skip every other line while trying to figure out what the code is actually doing.
   getParams(args);         // put command line parameters into the global space

For code blocks, a simple one or two line comment just ahead of or at the end of a block can be very helpful.
  // Initialize all framework global data areas
    getParams(args);
    getProperties();
    getEnvVariables();
 // end of initialization

The above is a very simplistic example and the comments may not be needed, but:
  • They should never need to be changed (i.e. no maintenance required)
  • They very effectively mark where any initialization changes should be made
For larger blocks of code, e.g. modules and subroutines, it can be effective to create a block of comments before coding begins which at least describes the code blocks in the order in which they will appear, with TBD (to be done) in the places where code is supposed to go, as in the sketch below.  This serves as a guide during code implementation and encourages simple changes to maintain accuracy.  The TBD or something similar can be quickly recognized as a place where code is incomplete.
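
For example, a new routine might begin life as nothing but its comment skeleton (a made-up example):

    static int load_config(const char *path)
    {
        // Open and validate the configuration file
        // TBD
        // Parse each section into the global configuration structure
        // TBD
        // Report unrecognized keys, then close the file
        // TBD
        return 0;
    }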

Many IDEs such as Eclipse support comment generation for various types of code blocks -- use them.

Comments should be brief and describe at least what is being done and why, not how it is being done.
All rules have exceptions including this one.

An exception to the brief rule is for library routines written in script type languages.  In this situation, the comments near the front of the code also serve as the user documentation and of necessity should contain enough information to allow proper use of the code.

An exception to the don't explain how rule should be made for blocks of very dense or intricate logic.  In these cases the comments should explain how the code accomplishes the stated purpose of the code.

Another type of comment I have seen many times reads something like:
// Fixes bug 234
Unless the bug report database is readily available, the only purpose this serves is to raise a flag if there are several such comments in a given block of code.  This situation should cause serious consideration to be given to a plan to re-implement the affected functionality.  I stated it that way because the problem may not be with the flagged code; the calling code may be using this code incorrectly or with incorrect expectations.

To re-iterate, the most important thing is to give information which will quickly let a maintainer know what the code/method/subroutine/module is supposed to do.

Saturday, August 17, 2013

Maintainable Software Overview

The purpose of this blog is to promote the creation of software and firmware which is designed to be maintained.

The focus is on ways and means of creating software which are as independent as possible of any development environment/ecosystem.  That is to say, it will focus on aspects of software architecture, design and coding practices which maximize maintainability.

All of the information presented here is based on forty-plus years of personal experience as a software engineer and anecdotal material from co-workers.  The material presented is expected to be applied where practical, and with at least a good understanding of why it does or does not apply to a specific situation.

My experience has been primarily with mainframe and workstation class computer systems.  Programs requiring a Graphical User Interface and web-based applications have not been considered, primarily due to insufficient experience.  Non-application software (device drivers, file systems, midlevel network code, language compilers, etc.) is also not specifically considered, primarily because the need for speed or severe constraints on RAM and/or other resources often override some of the recommendations.

I have defined maintainability as:  a measure of the effort required to change the functionality of application software.  A measure of ‘effort’ must include time, resources and expertise.

In general any software development manager is familiar with this definition of ‘effort’ as it applies to creating software.  The term ‘change the functionality’ applies to both enhancements as well as bug fixes.   It might also be said that maintainable code is designed to be leveraged.

Maintainability is related to several other “ilities” such as

  • Flexibility:  The ability to work with unanticipated data/conditions without code changes.
  • Portability:  The ability to operate in environments other than the one originally deployed in.
  • Reliability:  The ability to operate correctly in spite of failures in the program's environment or inconsistencies in the supplied data.
  • Reusability:  The ability to use code in a different application without modification.  It could be said that this is the ultimate goal of maintainability.

A program has been described as being composed of data and algorithms.  I also add control, or control flow.  I use the term ‘algorithm’ to refer to any section of code which manipulates application data.  The term ‘control’ refers to code which determines which algorithms are executed and/or the order in which they are executed.  It can therefore be stated that ‘data + algorithms + control = program’.
Experience has shown that the areas of volatility from greatest to least are:

  1.  Control
  2.  Algorithms
  3.  Application data.  Note the qualification on data.

What this order means is that the effort to improve maintainability will be most effective when applied first to flow control, then to algorithms, then to application data.  Experience has shown that during the architecture and design phases of creating a program, the order of importance is generally reversed.  That is to say, the most important thing to understand is the nature of the data to be processed, then the algorithms (code) which will be required to process the data, and then the conditions under which, and the order in which, the algorithms will be applied.

In keeping with the above definition of a program, the associated postings are generally organized into the following broad topics.  Each topic covers associated maintainability problems and ideas on how to mitigate them.

  • Documentation
    • This is something of an anomaly in that it can be the simplest and the least volatile as well as the most difficult to do well and the most volatile.
  •  Data
    • This covers application data, control data and parameters.
  •  Algorithms
    • This mostly covers encapsulation considerations and is confined to application specific data manipulation.
  • Control
    • This covers control patterns and strategies
      • Simple Controls (e.g. if, loop and case)
      • State machines
      • Simplified AI machines
  • Multi-threaded Applications (might be considered a subset of Control)
  • Multitasking
    • Cooperative multitasking
    • Interrupt driven multitasking
  • Considerations for Object Oriented and non-Object Oriented designs