Friday, 23 September 2011

Differences between HLD and LLD


Differences between high level design and low level design:
HLD is the overall system design, whereas LLD defines the actual logic for each and every component in the system.
Data flow, flow charts and data structures are covered under HLD.
Class diagrams with all the methods and the relations between classes come under LLD. Program specs are covered under LLD.
For HLD
Entry Criteria : SRS
Exit Criteria   : HLD, project standards, the functional design documents, and the database design document
For LLD
Entry Criteria : HLD
Exit Criteria   : program specification and unit test plan (LLD)

HLD – High Level Design



A high-level design provides an overview of a solution, platform, system, product, service, or process. Such an overview is important in a multi-project development to make sure that each supporting component design will be compatible with its neighboring designs and with the big picture.

The highest level solution design should briefly describe all platforms, systems, products, services and processes that it depends upon and include any important changes that need to be made to them.

A high-level design document will usually include a high-level architecture diagram depicting the components, interfaces and networks that need to be further specified or developed. The document may also depict or otherwise refer to work flows and/or data flows between component systems.

In addition, there should be brief consideration of all significant commercial, legal, environmental, security, safety and technical risks, issues and assumptions. The idea is to mention every work area briefly, clearly delegating the ownership of more detailed design activity whilst also encouraging effective collaboration between the various project teams.

Today, most high-level designs require contributions from a number of experts, representing many distinct professional disciplines. Finally, every type of end-user should be identified in the high-level design, and each contributing design should give due consideration to the customer experience.

High Level Design (HLD) is the overall system design - covering the system architecture and database design. It describes the relation between various modules and functions of the system. Data flow, flow charts and data structures are covered under HLD.

Low Level Design (LLD) is like detailing the HLD. It defines the actual logic for each and every component of the system. Class diagrams with all the methods and the relations between classes come under LLD. Program specs are covered under LLD.

 
High Level Software Design

High level software design, also called software architecture, is the first step: analyze and consider all requirements for a piece of software and attempt to define a structure which is able to fulfill them. Non-functional requirements, such as scalability, portability and maintainability, also have to be considered here. This first design step has to be more or less independent of a programming language. This is not always 100% possible, but a good high level design can be further refined into a low level design, which then describes the implementation in the desired programming language.

The Architecture

The first step in designing software is to define the architecture. Simply speaking this is a very high level outline of the components and layers of software. There may be some requirements which explicitly ask for some of the below named features, but even if there are no explicit requirements it is a good design style to adhere to these principles:

  •  Design layers which at least make the functional part of the software independent of the hardware platform. Any specialties which have to be covered when running on a certain platform belong in a dedicated hardware abstraction layer.
  •  Design an adaptation layer to adapt to a special operating system (if necessary). Operating systems may offer services and semaphores, but never use them directly in your functional software. Define your own services and semaphores and go through an adaptation layer that maps them to the operating system.
  •  Design any additional layers inside your functional software as appropriate.
  •  Design components inside your functional software. Depending on your requirements and future strategies it may be wise to e.g. design communication protocol components in a way that they can be easily removed and replaced by another protocol, to adapt to different platforms and systems. E.g. in the automotive industry the CAN bus is widely used for communication between the various electronic systems in a vehicle. However, some customers require different proprietary protocols. Your software can be designed so that the protocols can be exchanged easily, almost as easily as "plug and play", if the design is done properly.
  •  Design your own framework which controls the calls and interactions of your functional software.
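The adaptation layer idea above can be sketched in C. This is a minimal, hypothetical example (all names are invented for illustration): the functional software defines its own semaphore type, and only the adaptation layer would ever call the real OS. Here the "OS" is simulated by a plain counter so the sketch compiles anywhere.

```c
#include <assert.h>

/* Application-defined semaphore handle -- functional code sees only this. */
typedef struct { int count; } app_sem_t;

/* Adaptation layer: in a real project these bodies would call the OS API.
   Here the "OS" is a plain counter so the sketch stays portable. */
void app_sem_init(app_sem_t *s, int initial) { s->count = initial; }

int app_sem_take(app_sem_t *s)     /* returns 0 on success, -1 if unavailable */
{
    if (s->count > 0) { s->count--; return 0; }
    return -1;                     /* a real OS would block or time out here */
}

void app_sem_give(app_sem_t *s) { s->count++; }
```

Swapping operating systems then means rewriting only these three functions, never the functional software that uses them.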

Of course this was only a very rough outline of what an architecture may look like and what we have found to be the best way for many applications. However, your own system may require some additional architectural features.

Object Orientation

Object orientation is nowadays usually associated with certain design methods like UML and programming languages like C++ or Java. However, the principles of object orientation were developed long before these methods and programming languages were invented. The first steps of object oriented design were done using C. Indeed, object orientation is a principle of design rather than a tool or method based feature. Some of these principles are:
  •  Orientation of program components towards the real physical world. This means that the division of a software package is done according to the real outside world of a system and according to the main internal tasks of such a system.
  •  Combining all elements of software (i.e. data, definitions and procedures) into an object. This means that everything that is needed to handle an element of the system is grouped and contained in one object.
  •  Access to the object's data and functions via a clearly defined narrow interface, and encapsulation: elements which the outside world does not require are hidden inside the object.
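These principles can indeed be applied in plain C, long before C++ or UML enter the picture. A small illustrative sketch (the sensor type and function names are invented for the example): data and the procedures that handle it are grouped around one struct, and the outside world goes through a narrow set of functions instead of touching the state directly.

```c
#include <assert.h>

/* One "object": everything needed to handle a tank level sensor lives here. */
typedef struct {
    int level;                         /* encapsulated state */
} TankSensor;

/* Narrow, clearly defined interface -- outside code never touches .level */
void tank_init(TankSensor *t)            { t->level = 0; }
void tank_update(TankSensor *t, int raw) { t->level = raw; }
int  tank_get_level(const TankSensor *t) { return t->level; }
```

In a real project the struct definition would additionally be hidden behind an opaque pointer in the header, so that encapsulation is enforced by the compiler rather than by convention.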


Interfaces

The design of your interfaces is another element which adds to the stability, portability and maintainability of your software. The following things have to be observed:
  1. Only use function calls as interfaces and refrain from using memory pools or any other globally shared elements as interfaces.
  2. Make your interfaces as narrow as possible. I.e. use simple basic data types rather than complicated proprietary structures at the interfaces. It is sometimes amazing how simple interfaces can be if the functionality is distributed in a sensible way in appropriate components.
  3. Preferably make your interfaces uni-directional. This means that higher level components acquire data from lower level components and layers. Try to avoid bidirectional interaction between the same components.
  4. Describe your interfaces clearly. Even at a high level design the kind of information, the data width, resolution and sign has to be determined. This can be done on an abstract level i.e. there is no need to define them in terms of data types of a special programming language.
More details will be given in the description of low level design related to the programming language C.
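Rules 1 to 4 together might look like this in C: the lower layer exposes a single function returning a simple scaled integer instead of a pointer into its internal structures. The ADC value and the conversion formula are purely illustrative assumptions.

```c
#include <assert.h>
#include <stdint.h>

/* Internal state of the lower layer -- never exported directly. */
static int16_t raw_adc = 512;          /* illustrative 10-bit ADC reading */

/* Narrow, unidirectional interface: higher layers pull data via this call.
   The return type pins down width (16 bit), resolution (0.1 degC) and sign. */
int16_t sensor_get_temp_deci_c(void)
{
    /* hypothetical mapping of 0..1023 ADC counts to -40.0..+85.0 degC */
    return (int16_t)(-400 + ((int32_t)raw_adc * 1250) / 1023);
}
```

Note how the signature alone already documents the information content of the interface, with no proprietary structure crossing the layer boundary.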

Operating Systems and Timing

Basically there are two categories of microcontroller systems. The first one is EVENT driven, as e.g. cell phones and other modern communication equipment.

The other kind of application is TIME driven. These microcontroller systems usually have the task to measure and evaluate signals and react on this information accordingly. This measuring activity means that signal sampling has to be performed. Additionally there may be activities like feedback current controls which have to be performed. Both activities imply by the underlying theories that sample frequencies have to be as exact as possible.

Both categories of systems are called REALTIME SYSTEMS, but they are like two different worlds!

         The EVENT driven systems are comparatively simple. They are usually in an idle state until one of the defined events triggers a task or process, which is executed sequentially until it is finished and the system returns to the idle state. During the execution of such a task these systems usually do not react to other events. This "first come, first served" principle can be seen e.g. in a cell phone, where incoming calls are ignored after you have started to dial an outgoing call.

          TIME driven systems are much more complicated. Usually all possible inputs to the system have to be sampled and all outputs have to be served virtually simultaneously. This means that time slices have to be granted to the various activities and their duration has to be defined and limited to ensure the overall function of the system. It would be too much to go into more details here. However there are some general rules which should be considered:

  •  The CPU selection should be made according to the application. Some CPUs support time driven applications in an optimized way; e.g. it is recommendable to have sufficient interrupt levels, CAPCOM units and I/O ports which can be accessed without delays. It is sad to see that in recent years some well known microcontrollers which originate in the event driven world were pushed into the time driven market. The ease of development and the stability of the systems suffer from this. Although the CPU manufacturers attempted to compensate by designing suitable peripheral units, the whole situation still looks like an odd patchwork rather than a sound design.
  •  Operating systems can be event driven and non-preemptive for EVENT driven applications.
  •  Operating systems should be time driven and preemptive for TIME driven systems.
  •  Standard operating systems may fail you for high speed tasks, such as 250 us tasks for feedback current controls. In this case you have to resort to tricks and timer interrupts outside of the operating system. Therefore have a close look at an off-the-shelf OS before you base your system on it.
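For the high speed tasks mentioned here, a base timer interrupt can drive the time slices directly. A minimal sketch under stated assumptions (the 250 us tick, the task rates and the counters are all invented for the example; a real system would install timer_isr on a hardware timer):

```c
#include <assert.h>
#include <stdint.h>

volatile uint32_t tick;            /* incremented by the timer interrupt */
uint32_t runs_1ms, runs_10ms;      /* stand-ins for real task activations */

/* Hypothetical 250 us base timer ISR: slower tasks run at fixed multiples
   of the base tick, which keeps the sample frequencies exact. */
void timer_isr(void)
{
    tick++;
    if (tick % 4u  == 0u) runs_1ms++;     /*  4 x 250 us = 1 ms task  */
    if (tick % 40u == 0u) runs_10ms++;    /* 40 x 250 us = 10 ms task */
}
```

Because the fast control work happens inside the ISR itself, it meets its deadline even if a standard OS underneath cannot schedule tasks that quickly.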

A Word about Modern Design Methods and Tools

Most of our ideas concerning high level design were outlined above. To some this may look outdated, because there are so many design tools around which promise to make life easy. However there are a few things we want to point out concerning modern high level design methods and tools.

           There would be a lot to say about modern design methods, such as UML, SDL and others, but there are plenty of good websites which specialize in this. Tools are available which support high level design in one of these methods and even generate code automatically. Praises are sung that these 4th generation "programming languages" will be the future way of designing software, leaving the skill of coding to an automatic code export triggered by a simple hit on a button. There are certainly applications where this makes sense and is a sensible way to go, e.g. the use of such a tool (SCADE) for the programs in nuclear power plants. But does this also make sense for high volume microcontroller applications? There are a few stumbling blocks which may not be cleared away so easily in the near future:

Autocoders still generate a big overhead in resource consumption (RAM/ROM/Runtime) which can not be optimized.

Autocoders usually generate highly cryptic code which is thus unsuited to any further human inspection or test. This means that all testing that can be done on a low level has to be done by the respective design tool. Some tools (SCADE and SDT) support this: the demand for an "executable specification" is met and thus sufficient testing is possible. But this also means that a decision for such a method is at the same time a decision for higher resource consumption and a decision against mixed design, i.e. all the code has to come out of the tool, otherwise it fails to be testable.

For safety critical applications the tool and autocoder have to be certifiable, i.e. subject to checks by organizations such as the TÜV. This is a shoe too big for many modern tools, but a step which is necessary to lift a toy to the status of a real tool.

So far none of the well known design methods and associated tools is able to consider the timing aspect in microcontroller systems in a satisfying way. Interrupt service routines, and especially preemptive operating system tasks in microcontroller systems are not supported in these methods and tools. But they are one of the most important aspects of design for microcontrollers.

Some of these design methods and tools may be justified in appropriate environments, but the conclusion and our recommendation for high volume microcontroller systems is:

Use design methods and associated tools for rapid prototyping. This may be a big advantage to develop functions, but do not try to bring this tool output into series production for high volume microcontroller systems. You will hopelessly fail.

Use design methods to specify and describe complex functionality even for a microcontroller system. This may make the way of design easier, it uses a syntax which can be trained and understood by others, but be prepared to have a deliberate break in the tool chain where you set up a proper low level design and coding.

Do not attempt to use such a design tool to design your interrupts and operating system tasks. Although some tools recently offer the inclusion of a standard microcontroller operating system, this usually does not allow preemptive tasks and does not support interrupt service routines.
   

Thursday, 22 September 2011

Software Testing Certification


Certifications

          Several certification programs exist to support the professional aspirations of software testers and quality assurance specialists. No certification currently offered actually requires the applicant to demonstrate the ability to test software, and none is based on a widely accepted body of knowledge. This has led some to declare that the testing field is not ready for certification. Certification itself cannot measure an individual's productivity, skill or practical knowledge, and cannot guarantee their competence or professionalism as a tester.
 
          Nevertheless, a testing related certification might add some value to your profile, and it can be important for various reasons.

          Various certification programs are available to the software testing professional. The purpose of this page is to provide quick information and links about the certifications available to you.

Certification options available to the testing professional can be divided into four categories:

1. Certifications from the International Software Testing Qualifications Board ( ISTQB ) :
ISTQB (International Software Testing Qualifications Board), responsible for conducting the ISTQB certification exam, was officially founded in Edinburgh in November 2002 and is represented by various national testing boards in different member countries. Presently (as of January 2009), 41 national testing boards are members of ISTQB and responsible for conducting the certification exam in their respective countries.

2. Certifications from the Quality Assurance Institute ( QAI ) :
The QAI Global Institute, formerly known as Quality Assurance Institute, was founded in 1980 in the United States of America. QAI's founding objective was and remains to provide leadership in improving quality, productivity, and effective solutions for process management in the information services profession.
The objectives of QAI certifications are:
   a. To recognize individuals for a level of proficiency in the IT Quality Assurance, Project Management and Software Testing industries.
   b. To recognize the competencies needed to drive process maturity.

3. Certifications from the International Institute for Software Testing ( IIST ) :
In 1999, IIST took the lead in education-based certifications by forming an Advisory Board of industry experts and practitioners to provide direction to the effort of developing education-based certifications. IIST offers two certification programs that are based on well-defined Bodies of Knowledge approved by IIST's Advisory Board.
The two certifications now offered by the International Institute for Software Testing are:
  a. Certified Software Test Professional (CSTP)
  b. Certified Test Manager (CTM)

4. Certifications from HP :
Mercury Company is considered one of the global leaders in business technology optimization software. Started about 13 years ago, they market integrated enterprise testing, production tuning and application management solutions and services. Mercury has divided its certification program into two categories:
The first Mercury certification is the Interactive Certified Product Consultant (CPC), for people who wish to demonstrate consultant level skill with one or more Mercury Interactive software products. Available for:
   a. TestDirector
   b. WinRunner
   c. LoadRunner
   d. QuickTest Pro
   e. Topaz
The other Mercury certification is the Interactive Certified Instructor (CI), for people who wish to train others on Mercury Interactive products and have access to official course materials.

For more information, refer to the official website of each certification body.