Tuesday, September 29, 2009
Company Quality
The company's approach to quality mainly emphasizes four aspects.
1. Elements such as controls, job management, adequate processes, performance and integrity criteria, and identification of records.
2. Competence, such as knowledge, skills, experience and qualifications.
3. The so-called soft elements, such as personnel integrity, confidence, organizational culture, motivation, team spirit and quality relationships.
4. Infrastructure, which is also a most important aspect.
If any of these aspects is deficient, the quality of the outputs will be at risk.
The quality management approach is not limited to any one company; it can be applied to any business activity.
Some of the business fields that can use a quality improvement process are:
• Design Work
• Administrative services
• Consulting
• Banking
• Insurance
• Computer software
• Retailing
• Transportation
All of these businesses should include a quality improvement process. The process is generic in the sense that it can be applied to any of these activities, and it establishes a behaviour pattern that in turn helps achieve quality.
The business is in turn supported by quality management practices, which can include a number of business systems, usually specific to the business unit in which the activities are carried out.
In manufacturing and construction activities, these business practices can be equated to the models for quality assurance defined by the International Standards in the ISO 9000 series and the associated specifications for quality systems.
Even so, in such company quality systems, inspection alone did not keep the work floor under control, and the actual work floor is where the majority of quality problems arise. This led to the development of the quality assurance and total quality control concepts that have emerged more recently.
Friday, September 25, 2009
Project Management Phases
Requirement Specification:
Requirement Specification is a pivotal stage of the SDLC. During this stage the Project Manager is in constant contact with the customer to find out the requirements of the project in detail. The main tasks in this phase include requirement determination, risk analysis, setting up schedules, and deciding deliverables. There are many ways to communicate with the customer, such as instant messenger, email, phone, voice chat or personal meetings. At the end of the stage there is a requirement specification which contains all the details of what the customer needs.
Requirement Analysis and Design:
In the next stage the main players are the Project Manager and the System Analyst. They review and analyze the customer's requirements and then start designing the project. System architecture, database design, program specifications and test scenarios are determined. The detailed design document they prepare is used by the developers and programmers to perform the coding.
Coding and Testing:
Programmers begin programming in this phase using the Detailed Design Document. As the project progresses, the programmers' progress is monitored by the Project Manager and the Project Leader. The Project Manager stays in constant contact with the customer and provides updates on the progress of the project via MS Project. The programmers follow the various coding standards decided by the company. The Project Leader helps the programmers with the coding problems they face and guides them to solutions. Testing is done by the QA team in parallel for the completed modules, and approval is given to each module once it has passed its initial tests, before integration.
Deployment and Support:
This phase starts with deployment of the project. The initial hardware and software setup necessary to run the project is a very critical part of the project. Once the project is completed, the Project Manager contacts the customer and prepares for the set-up. The software is handed over to the customer for acceptance testing only after complete internal testing. Support is provided for a limited number of days, during which any minor changes required by the customer are made.
Monday, September 21, 2009
Project Management Methodology
The different phases are
Initiation/Setup
- Testing Strategy
- Automation Feasibility
- Manual & Automated components
- Tool selection for automation
- Testing Requirements & Sign off
- Unit Testing requirement
- Business Functional Requirements
- All interfaces
- Application security levels
- List of platforms for compatibility
- Critical transactions for performance testing
- Performance goals
- Globalization needs
- Effort estimates and price signoff
- Scheduling sign off
- Project Plan
- Project communication
- Workflow between Development & TS Testing teams
- Developer – Tester interaction process for UT (if required)
- Application Demo
- Domain Training
- Shadow transfer
- Study of user and operations manuals
- Globalization rules
- Test plan and Test cases
- Hardware and Software resource set up
- Ghost image plan
- Test bed creation
- Special requirements for globalization testing
- Architecture
- Automation Test Flow
- Identify reusable elements
- Create and test scripts
- In-line auto test setup
- Critical transactions
- Scripting
- Environment setup
- Iterative test run
- Analysis
- Execute test scripts or cases
- Log, track and report defects
- Improve and increase automation
- Monitor, analyze and feedback until Q levels achieved
- Allocate technical team to support warranty levels
- Optionally agree to a per-bug payment if errors are above warranty levels
- Integrate warranty support team into maintenance team
- Live monitoring
- Warranty bug fixing
- Increase automation levels
- Improve test scripts and cases
Friday, September 18, 2009
Agile Project Management
1) The project is not fully planned up front because there is no clear picture of where it is going, so fully fledged planning is nearly impossible at the start.
2) Only a small amount of planning can be done initially, and the plan is iterated on over the course of the project.
3) Many things will happen during execution.
4) Risk management is a vital part of agile projects.
5) During the control phase, the various happenings need to be measured so that the project can be guided.
6) All measurements should be tracked so that we know where we actually are and where we can move to.
7) When all the testing is done, and we know it is done, we can close the testing.
So an agile project can release the product at almost any time without much prior notice, and a module can be closed at any time.
The Nature of Testing
Agile testers essentially test alongside the developers.
• Their perspective is special.
• Developers rely on the testers for requirements, which is known as test-first development.
• Developers may have to wait some hours, or maybe a day, to get the requirements from the testers.
• In agile projects, builds are readily available.
• Testing cycles should complete quickly.
• This changes the nature of how tests are planned and developed.
Agile Project Managers
Agile project managers are the ones who guide the project. They:
• Negotiate each iteration's contents (is it possible to fit in one more user story?).
• Track each iteration's changes and progress.
• Facilitate a 15-minute daily standup meeting to expose progress and obstacles.
• Facilitate the customer and the team working together.
• Use data from completed iterations to plan the next iteration (old data helps in planning the next one).
• Need "soft" skills like negotiation, oral communication and influence to make project management effective.
Project Initiation questions:
• How many of you write project charters?
• How many of you jump directly to the project schedule?
• Project charters are invaluable (they capture the need for the project, the expected returns, the risks and the people associated with it).
• These are the circumstances when the need especially arises:
o One customer can't represent the entire customer set
o The customer can’t/won’t sit with the team
o You’re a pilot project trying out agile methods
Project Planning
Planning is not the holy grail of a successful project,
but without a plan you can't be successful.
Plan only what is needed; don't bother about the rest.
• How many of you plan the whole project at the beginning?
• How many of you plan to replan?
• Agile projects practice schedule and planning iteration, as well as development iteration.
Steering Guide for an Agile Project
Daily 15-minute standup meetings
• Measure each iteration’s velocity
• Monitor fault feedback ratio
o Refactoring is updating the code, not the design
Tells you whether you need to change pairs or institute other peer review techniques.
Iteration close criteria
• “Did we accomplish all the user stories we thought we could do?”
• “Is it time to complete an iteration?”
• “Can the customer use what’s in this iteration?”
• Can perform a quick retrospective
Test Planning
• How to develop the test strategy documents and release criteria, and decide what and who will test on which configurations.
• How many of you plan for X iterations of testing?
• How many of you plan for test automation?
• Agile projects practice testing iteration, as well as development iteration
Testing in an Agile Project
The automated tests are developed at the top level.
• Developers develop unit tests
• Develop automated tests at the integration level or component level.
• Refactor the tests
• Agile projects demand automated tests (as the main driver for tests)
Completing Testing for an Agile Project
• If testing is conducted along with the user, there is no need for a separate acceptance test.
• Some projects plan for a short iteration with exploratory testing to run through all the system tests and fix problems they missed.
• Measure code coverage to see if there are large holes.
Defining quality
There is someone whose view of quality you should care about the most.
There are multiple users, each with their own agenda.
Why should you care about what quality means for your project?
About Iterations
• Two-six weeks long (2-3 weeks is better)
– An iteration should only be as long as the amount of work you are willing to throw away
– Team decides what they can accomplish within an iteration
– Team monitors its velocity: number of user stories per iteration
• Customer and team develop user story cards (not a formal requirements document)
• Customer and team work together to understand each user story
• Testers and customer write acceptance tests
• All code is developed test-first (unit tests)
• Use some sort of peer review technique (pair programming, informal review)
Choose What to Measure
• EQF (Estimation Quality Factor)
• FFR (Fault Feedback Ratio) – see the sketch after this list
• Cost to fix a defect
• When people work on the project vs. when they are assigned
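As a rough illustration of the Fault Feedback Ratio (the counts below are invented, and FFR is taken here in its common sense of the proportion of fixes that come back as bad fixes; this is a sketch, not part of the original talk):

#!/bin/sh
# Fault Feedback Ratio: bad (reopened or rejected) fixes as a share of all fixes.
total_fixes=40          # hypothetical number of fixes made in the iteration
bad_fixes=6             # hypothetical fixes that were reopened or rejected

# awk handles the floating-point division.
awk -v bad="$bad_fixes" -v total="$total_fixes" \
    'BEGIN { printf "FFR = %.2f\n", bad / total }'    # prints: FFR = 0.15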
Thursday, September 3, 2009
UNIX shell Scripting
Introduction to UNIX
UNIX is one of the most popular operating systems; it has many advantages, such as the following.
Multitasking: UNIX is designed to do many things at the same time. In computing, multitasking is a method by which multiple tasks or processes share common processing resources. In a computer with a single CPU, only one task is said to be running at any point in time, meaning that the CPU is executing instructions for one task. Multitasking solves the problem by scheduling: for example, one file can be spooling to the printer while another file is being edited. This is important for users because they don't need to wait for one application to end before starting a second one.
Multi-user: Multi-user is a term that defines an operating system or application software that allows concurrent access by multiple users of a computer. Time-sharing systems are multi-user systems. The computer can take the commands of a number of users to run programs, access files, and print documents at the same time.
Stability: One of the design goals of UNIX is robustness and stability. UNIX is stable by nature; it does not need periodic reboots to stay stable or to maintain performance levels. There is no problem of memory leaks building up, so it won't freeze or slow down, and uptimes of hundreds of days, or more than a year, are common. It therefore requires less administration and maintenance.
Performance: UNIX systems provide high performance on networks and workstations, and can handle large numbers of users at a time. It is possible to tune UNIX systems to meet performance needs ranging from embedded systems to symmetric multiprocessing systems.
Compatibility: UNIX can be installed on different types of hardware, including mainframe computers, supercomputers and microcomputers. Linux, one of the popular variants of UNIX, runs on roughly 25 processor architectures including Alpha/VAX, Intel, PowerPC, etc. UNIX is also compatible with Windows for file sharing via SMB (the Samba file system) and NFS (Network File System).
Security: UNIX is one of the most secure operating systems. Firewalls and flexible file access permission systems prevent access by unwanted visitors or viruses.
UNIX Architecture Diagram:
The shell is the command interpreter for UNIX systems. It sits at the base of most user-level UNIX programs: all commands invoked by the user are interpreted by the shell, which loads the necessary programs into memory. Being the default command interpreter on UNIX makes the shell a preferred choice for interacting with programs and writing glue code for test scripts.
Advantages of using Shell for test automation on UNIX
Following are some of the advantages of using Shell for test automation on UNIX,
Free: Most of the popular shells are free and open source, so there is no additional cost. No additional software required: all UNIX systems have a default shell already installed and configured (bash/ksh/csh), so there is no need to spend extra time setting one up. The shell is common to all UNIX systems, and UNIX users generally understand shell problems well and can help resolve them.
Powerful: It provides plenty of programming constructs to develop scripts with simple or medium complexity.
Extensible: Shell scripts can be extended with additional useful commands and programs to add functionality. Scripts can be written with the default editors available (vi, emacs, etc.) and run and tested directly; no specialized tool is needed.
Color-highlighted reports: The shell can even generate color-highlighted reports of test case execution, which is a great help (a small sketch follows this list).
Portability: Shell scripts are portable to other UNIX platforms as well as to Windows via Cygwin, a shell environment on Windows that allows us to execute shell scripts there too.
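As a minimal sketch of such color-highlighted output (assuming an ANSI-capable terminal; the test name and log file below are invented for the example):

#!/bin/sh
# Print PASS in green and FAIL in red using ANSI escape codes.
GREEN='\033[0;32m'; RED='\033[0;31m'; RESET='\033[0m'

report() {                        # report <test-name> <exit-code>
    if [ "$2" -eq 0 ]; then
        printf "%b%s: PASS%b\n" "$GREEN" "$1" "$RESET"
    else
        printf "%b%s: FAIL%b\n" "$RED" "$1" "$RESET"
    fi
}

# Hypothetical usage: run a check and report its result.
grep -q "expected string" testcase1.log
report "testcase1" $?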
Shell Commands
For testing it is important to do test setup, test procedure steps, validation of actual result with expected result, clean up steps to bring the application back to original state, scheduling a test, prepare test results log, and report the test results. Shell has many commands, which can help to achieve automation of these test activities.
Following are some useful Unix Shell commands for automation.
Verification and setup testing: When we want to test for installation/ uninstallation etc we can effectively use the file verification functionality of the shell.
-f to check whether a file exists
-r to check whether a file is readable
-w to check whether a file is writable
-x to check whether a file is executable
We can also invoke external commands and check their return codes for success or failure of execution using the predefined variable '$?'.
The availability of common looping constructs like 'for' and 'while' also makes the shell an obvious choice for automating installation/uninstallation testing, for checking whether commands and programs execute successfully, and for functionality testing as well.
Most of the time we also need to set up environment variables and proper links for the test environment; this task too can be automated using the shell, which is a great help.
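A minimal sketch of these checks (the install location, file names and the --version option are hypothetical):

#!/bin/sh
# Verify that an installer produced the expected files with the right attributes.
INSTALL_DIR=/opt/myapp             # hypothetical install location

for f in "$INSTALL_DIR/bin/myapp" "$INSTALL_DIR/etc/myapp.conf"; do
    if [ ! -f "$f" ]; then
        echo "MISSING: $f"
        exit 1
    fi
done

[ -x "$INSTALL_DIR/bin/myapp" ] || echo "NOT EXECUTABLE: $INSTALL_DIR/bin/myapp"
[ -r "$INSTALL_DIR/etc/myapp.conf" ] || echo "NOT READABLE: $INSTALL_DIR/etc/myapp.conf"

# Invoke an external command and check its return code via $?.
"$INSTALL_DIR/bin/myapp" --version > /dev/null 2>&1
if [ $? -ne 0 ]; then
    echo "myapp --version failed"
fi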
Interactive Application testing using expect
Expect is a program that talks to other interactive programs according to a script. We tell expect what output to expect from the program and what response should be sent back. When writing an expect script, the output from the program is the input to the expect script, and the output of the expect script is the input to the program. The expect script keeps expecting output from the program and keeps feeding input to the interactive program, thus automating it. Expect is generalized so that it can interact with any user-level command or program, and it can even talk to several programs at the same time. In general, expect is useful for running any program that requires interaction between the user and the program; all that is necessary is that the interaction can be characterized programmatically.
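As a rough sketch of the idea (the program ./myapp and its prompts are invented for this example, and expect must be installed):

#!/bin/sh
# Drive a hypothetical interactive program with expect embedded in a shell script.
expect <<'EOF'
set timeout 10
spawn ./myapp                 ;# hypothetical interactive application
expect "Username:"
send "testuser\r"
expect "Password:"
send "secret\r"
expect "Welcome"              ;# fails via timeout if the expected banner never appears
EOF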
Executing shell scripts on Windows using Cygwin
Cygwin is a Linux-like environment for Windows. It consists of two parts:
- A DLL, cygwin1.dll, which acts as a Linux emulation layer providing Linux API functionality.
- A collection of tools, which provide a Linux look and feel.
Cygwin is available under the GPL (GNU General Public License) and is free software. It gives us almost all the standard UNIX shells (bash, ksh, csh, etc.), so most of your scripts can run on Windows as well. Thus Cygwin provides a lot of portability to shell scripts.
When not to use shell scripts for automated testing
It’s not a good idea to use shell scripts in the following cases:
- Need to generate or manipulate graphics or GUI
- Need port or socket I/O
- Complex applications with type checking, function prototyping etc
- Need data structures like linked lists, trees etc.
If any of the above is true, it’s a good idea to use a more powerful language like C, C++, Perl or Python for test automation.
Reference:
http://en.wikipedia.org/wiki/UNIX
www.Onestoptesting.com
Tuesday, August 25, 2009
Project Management Templates
- Project Initiation Templates
- Project Planning Templates
- Project Execution Templates
- Project Closure Templates
- Risk Management Templates
- Change Management Templates
- Quality Management Templates
- Cost Management Templates
- Issue Management Templates
- Time Management Templates
- Procurement Management Templates
- Acceptance Management Templates
- Communications Management Template
There are several benefits to using these various kinds of templates; some of them are:
- Less time and effort spent
- Increased cost savings
- Reduced projects risks
- Fewer changes and issues
- Improved deliverable quality
- More efficient monitoring
- Improved project tendering
- Closer control of delivery
- Better supplier management
- Higher performing staff
- Greater project success
The project management methodology includes templates which help you start projects by defining the nature of the business, undertaking a feasibility study, completing the project charter, recruiting the project team and setting up the Project Office. Some of the initiation templates are:
- Business Case
- Feasibility Study
- Project Charter
- Job Description
- Project Office Checklist
- Phase Review Form
After the project has been defined and the necessary project team acquired, it is time to enter the detailed project planning phase. A number of planning documents guide the project team through the various phases of the project life cycle. Some of the planning documents available are:
- Project Plan
- Resource Plan
- Financial Plan
- Quality Plan
- Risk Plan
- Acceptance Plan
- Communications Plan
- Procurement Plan
- Tender Management Process
- Statement of Work
- Request for Information
- Request for Proposal
Execution is the phase where all the deliverables are built, measured and presented to the customer for acceptance. For each deliverable there should be a management method or model available to monitor and control the output being produced by the project. These processes cover time management, cost management, quality management, change requests, risk management, issues, suppliers, customers and communication. Here are some execution templates that will help to execute the project successfully:
- Time Management Process
- Timesheet Form
- Timesheet Register
- Cost Management Process
- Expense Form
- Expense Register
- Quality Management Process
- Quality Review Form
- Quality Register
- Change Management Process
- Change Request Form
- Change Register
- Risk Management Process
- Risk Form
- Risk Register
- Issue Management Process
- Issue Form
- Issue Register
- Procurement Management Process
Project closure involves releasing the final deliverable to the customer, handing the project documentation over to the business, terminating supplier contracts, releasing project resources and informing stakeholders of the closure. The only remaining step is a post-implementation review to identify the level of project success and note any lessons learned for future projects. The project management methodology includes templates to help close the project more effectively; here are some:
- Project Closure Report
- Post Implementation Review
Saturday, August 22, 2009
Bugs found during software testing
2. Bug Reporting: A detailed report documenting the malfunction or bug found in the software, sent to the developer.
3. Bug Tracking: Tracking the bugs and reviewing them.
4. Bug Tracking System: A system designed to track bugs and review them periodically through change requests.
Tips for tracking bugs:
1. A good tester always uses the minimal steps to reproduce a bug. If the steps are minimal, it will be easy for the developer to reproduce it.
2. A bug should be closed by the person who found it. Anybody can fix the bug, but only that person can confirm that what they found has actually been fixed.
3. There are resolutions like fixed, won't fix, postponed, not reproducible, duplicate or by design.
4. When the reproduction steps are missing, programmers often make the excuse that the bug report is incomplete and the bug is not reproducible.
5. There should be a checkpoint for every version of the software released to the testers, so that confusion during testing about whether a bug is fixed or not is avoided. Version control of these checkpoints helps track the changes made in the software.
6. Testers should use a bug database that the programmers can easily track. Programmers should not accept any bug other than through the bug database. If a tester sends a mail describing a bug to a programmer, it should be sent back to them to put into the bug database, with a note that email can't be tracked.
7. Testers should not mail the programmer about a bug; instead they should put the bug in the bug database, which will notify the corresponding programmer by email.
8. Programmers should make sure their colleagues are also familiar with the bug database.
9. Managers have to make sure that the bug database they have installed is fully functional and is being used correctly.
10. The bug database should be standardized; its fields should not be modified again and again.
Priority in Bug tracking
Priority ranks the bugs' importance so that they can be fixed in order.
The priorities are
P1 - Fix in next build
P2 - Fix as soon as possible
P3 - Fix before next release (default).
P4 - Fix if time allows
P5 - Unlikely to be fixed
Bug tracking severity
Severity describes the effect of the bug on the software.
Blocker - Blocks development and/or testing work
Critical - crashes, loss of data, severe memory leak
Major - major loss of function
Normal - Minor loss of function. Unfriendly behavior that is hindering, but workable, for the user or service
Minor - Minor loss of function. Unfriendly behavior that is merely annoying to the user
Trivial - cosmetic problem like misspelled words or misaligned text
Enhancement - Request for enhancement
Bug Life Cycle
Once a bug is found, it goes through various stages:
NEW: Newly added; awaiting assignment, analysis, fixing and possible reassignment.
ASSIGNED: A programmer has accepted the bug and has to analyze and fix it.
REOPENED: A resolved bug that has been reopened because of a new problem or an enhancement; it is reassigned and must be analyzed and fixed again.
RESOLVED: The analysis and fixing work is finished.
VERIFIED: Quality assurance has approved the fix and marked it as verified.
CLOSED: The bug is finished; in some cases it can be reopened.
Resolutions
Fixed - The Bug is fixed
Invalid - The problem described is not a bug
WontFix - The bug will never be fixed
Later - The bug will not be fixed in the current version
Remind - The bug probably won't be fixed in the current version, but should be kept as a reminder
Duplicate - This is a duplicate of an earlier reported problem.
Worksforme – Not reproducible.
Moved – Bug is shifted to other database.
Effective bug reporting
Keep bug reporting simple:
Only necessary data should be provided during the bug reporting.
Describe how to reproduce bugs:
Exact steps should be given to reproduce the bug
Record how bugs are detected:
Give all the available data about how the bug appeared.
Write bug reports immediately:
The bug report should be filed immediately. If you wait, it will be very hard to reproduce the steps.
Writing a one-line report summary:
A one-line summary should be included to make the bug easy to recall.
Wednesday, August 19, 2009
Software security test cases
There is an important method of testing in which the customer's usage is duplicated, i.e. we test as a customer ourselves. During the quality assurance phase of the development cycle, a test plan is formulated by documenting the test cases for these tests. This ensures that the common needs of the customer are not missed during the development phase, and that those needs are not missed in the testing phase either. Quality assurance teams should understand that security issues and defects are not the responsibility of the software developer or the tester alone; they are QA's responsibility as well.
QA engineers may not understand the internals of a specific piece of software, but they work closely with the testing community to see how deeply the software testers have penetrated the code. QA also has the advantage of access to internal documents, which they can later use to help a test engineer test the application.
It is important to document all the attacks that an attacker could perform against the application and to incorporate them into the standard test plan.
Preparation
Before preparing the test cases it is important to know the scope of testing. I have divided the scope into three sections.
1) Identifying the inputs
a. The needed Files
b. The environmental variables
c. Various configuration parameters.
d. External configuration files
e. The “Regedit” configuration
f. Any database
g. Hidden commands
h. Any other required inputs, which should be asked of the development team.
All the possible ways an input can arrive should be identified. Basic requirement testing should be done along with security-related tests, and there should be some way to test for buffer overflow and format string errors. If a huge amount of data is loaded into an input of the application and the application produces errors, crashes or behaves awkwardly, these are signs of a buffer overflow; the application developer should be able to find the reason for the crash, and it should be resolved. A poorly formatted input that crashes the application compromises the product's stability and security. An application usually has various inputs, and the developers should inform the testing team how and when each input is used so that it can be checked for the security of the application.
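A rough sketch of such an over-long / malformed input check (the program name and its --name option are invented; observing a crash only hints at an overflow, it does not prove one):

#!/bin/sh
# Feed increasingly long strings and a printf-style string to a hypothetical
# program and watch for crashes (exit codes above 128 indicate a signal).
for len in 128 1024 65536; do
    input=$(awk -v n="$len" 'BEGIN { while (n-- > 0) printf "A" }')
    ./myapp --name "$input" > /dev/null 2>&1
    rc=$?
    [ "$rc" -gt 128 ] && echo "possible crash (signal $((rc - 128))) at length $len"
done

# Format string probe
./myapp --name '%s%s%s%n' > /dev/null 2>&1
[ $? -gt 128 ] && echo "possible crash on format string input"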
Installation
• Used by an installer
• Instructions to be followed during installation
• Using the necessary exe file or bat file.
Deployment
Deployment should be made in two domains or environments
a) Trusted Environment
b) Untrusted or third party environment.
During installation and deployment, care should be taken with the various files and registry settings needed for installing and executing the application. Even temporary files that exist in the temp folder for no more than a second can allow access to sensitive data on a user's machine. On UNIX machines the user mask (umask) governs default file permissions; during deployment testing the umask can be set so that the system or application is fully open, so that every potential security breach can be found. It should be ensured that appropriate permissions are set on files, on all newly created databases and on registry keys during deployment.
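A minimal sketch of such a post-deployment permission check (the paths are hypothetical, and 'stat -c' is the GNU coreutils form; BSD systems use 'stat -f' instead):

#!/bin/sh
# Flag world-writable files under a hypothetical install tree.
INSTALL_DIR=/opt/myapp

find "$INSTALL_DIR" -type f -perm -002 -print | while read -r f; do
    echo "WORLD-WRITABLE: $f"
done

# Check that a sensitive configuration file is not readable by others.
perms=$(stat -c '%a' "$INSTALL_DIR/etc/secrets.conf")
case "$perms" in
    *[4567]) echo "secrets.conf is world-readable (mode $perms)" ;;
esac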
Different types of testing should be conducted to find the security breaches.
1) Functional testing.
a. Permissions on the main files should be as restrictive as possible. If the permissions are loosely defined, it is a security issue of severity level 1.
b. Sensitive data should be encrypted with a proper algorithm; failure to do so is also a severity level 1 issue.
c. A database containing user data that allows overly broad permissions is also a severity level 1 issue.
2) Logical tests
a. Authentication failure: not allowing a valid user to log in properly is a severity level 2 issue.
b. Not providing the necessary instructions when the login data does not match is a severity level 3 issue.
c. Any confirmation message that is shown should not contain sensitive data.
d. Taking a prolonged time to reset a temporary password.
Monday, August 10, 2009
Keyword driven Testing
Base Requirements
There are several requirements that are considered "base requirements" for success with keyword-driven testing. These include:
Test development and test automation should be kept separate – It is very important to split test development from test automation. The two disciplines require very different skills. Basically, testers are not and should not be programmers. Testers must be skillful at defining test cases independent of the underlying technology used to implement them. Technically skilled individuals, the automation engineers, implement the action words needed to execute those test cases.
Test cases must have a clear and differentiated scope – It is important that test cases have a clearly differentiated scope and that they do not deviate from it.
Tests must be written at the right level of abstraction – such as the higher business level, the lower user interface level, or both. It is also important that the test tools provide this level of flexibility.
The Framework
The implementation of keyword driven testing methodology is framework dependent. This framework requires the development of data tables and keywords, which are independent of the test automation tool used to execute them. It also needs the test script code which "drives" the application-under-test and the data.
In a keyword-driven test, the functionality of the application-under-test is documented in a table as step-by-step instructions for each test.
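As a loose illustration of the idea (not any particular tool's format; the keywords, table layout and functions below are invented): a test is written as a table of keywords and arguments, and a small driver maps each keyword to an implementation.

#!/bin/sh
# Hypothetical keyword table: one step per line, fields separated by '|'.
#   keyword | argument1 | argument2
cat > login_test.txt <<'EOF'
open_app|/opt/myapp/bin/myapp
enter_text|username|testuser
enter_text|password|secret
click|login_button
verify_text|Welcome testuser
EOF

# Driver: read the table and dispatch each keyword to a shell function.
open_app()    { echo "starting $1"; }
enter_text()  { echo "typing '$2' into field $1"; }
click()       { echo "clicking $1"; }
verify_text() { echo "checking that '$1' is displayed"; }

while IFS='|' read -r keyword a1 a2; do
    "$keyword" "$a1" "$a2"
done < login_test.txt

In a real framework, the driver and keyword implementations would live in the automation layer, while testers only edit the table.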
Methodology
The keyword-driven testing methodology divides test design into two stages:
Planning Stage
Analyzing the application and determining which objects and business-process operations need to be tested.
Deciding which keywords should provide additional functionality, to achieve business-level clarity and/or to maximize efficiency and maintainability.
Implementation Stage
There should be a unique reference identifying each object, sometimes known as an object repository. It should also be ensured that these references have clear names that follow any predetermined naming conventions.
Developing and documenting business-level keywords in function libraries. Creating function libraries involves developing customized functions for the application that needs to be tested.
This methodology requires more planning and a longer initial time investment, but it makes the test creation and test maintenance stages more efficient, and the individual tests are more readable and easier to modify.
Vision for Automation
There must be a clear vision for the automation.
Having a good methodology – It is important to have a good integrated methodology for testing and automation. It is also important to use the best technology that supports the methodology, enhances flexibility, minimizes technical efforts, and maximizes maintainability.
Have the right tools – Any tool that is in use should be specifically intended for keyword based testing. It should be flexible enough to permit for the right mix of high and low level testing. It should allow the testers to build keyword tests without difficulty and in no time. It should not be overly complicated for automation engineers.
Three "success factors for automation" – There are three critical success factors for automation that the vision should account for. They are:
Test Design
Test design is more important than the automation technology. Design is the most underestimated part of testing. It is my belief that test design is the single most important factor for automation success.
Automation Architecture
Scope, assumptions, risks
Methods, best practices
Tools, technologies, architecture
Stake holders, including roles and processes for input and approvals
The "right" team must also be assembled.
Test management, which is responsible for managing the test process.
Test development, which is responsible for producing the tests. It should include test leads, test developers, end users, subject matter experts, and business analysts.
Automation engineering, responsible for creating the automation scheme for automatic execution. Members of this team include a lead engineer as well as one or more automation support engineers.
Support functions, providing methods, techniques, know how, training, tools, and environments.
For the team there should be a clear division of tasks and responsibilities as well as well defined processes for decision making and communication.
How to Measure Success
With any major undertaking, it is important to define and measure "success". There are two important areas of measurement for success – progress and quality.
Progress
You should measure test development against the test development plan. If goals are not reached, act quickly to find the problems. Is the subject matter clear? Are stake holders providing enough input? Is it clear what to test? Is the team right?
You should measure automation and look at things such as implemented keywords and interface definitions.
You should measure test execution looking at things such as how many modules are executed and how many executed correctly?
Quality
Some of the key quality metrics include:
Coverage of system and requirements
Assessments by peers, test leads, and by stake holders (recommended)
Effectiveness
Are you finding bugs?
Are you missing bugs?
Can you find known bugs (or seeded bugs)?
After the system is released, what bugs still come up? You should consider calculating the "Defect Detection Percentage" (a small sketch follows this list).
Dig into your bug database for additional insights
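A minimal sketch of that calculation (the counts are invented; the formula used here, defects found by the team divided by all defects including those reported after release, is the commonly quoted definition of DDP, not something specific to this post):

#!/bin/sh
# Defect Detection Percentage: bugs found before release vs. total bugs.
found_by_testing=92         # hypothetical count found before release
found_after_release=8       # hypothetical count reported by users afterwards

total=$((found_by_testing + found_after_release))
ddp=$((100 * found_by_testing / total))
echo "DDP = ${ddp}%"        # prints: DDP = 92%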
How to Test Software when requirements are changing
RAD (rapid application development) helps software developers produce first versions very quickly, which causes many headaches for testers. With every change there is a possibility of creating new defects. The only way to find them is to perform regression testing: repeating a series of previous test cases and comparing the results with the previous results to find the differences.
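A bare-bones sketch of that compare-against-a-baseline idea (the test script layout and baseline directory are hypothetical):

#!/bin/sh
# Re-run each test script and diff its output against a stored baseline.
BASELINE_DIR=baselines       # hypothetical directory of known-good outputs
mkdir -p current

for t in tests/*.sh; do
    name=$(basename "$t" .sh)
    sh "$t" > "current/$name.out" 2>&1
    if diff -u "$BASELINE_DIR/$name.out" "current/$name.out" > /dev/null; then
        echo "PASS: $name"
    else
        echo "DIFF: $name (output changed since the last run)"
    fi
done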
Is it possible to test in rapid development?
Truthfully, no. This is a tricky question, because even in a stable environment it is not possible to test completely. So in a rapidly changing environment the question should be: "Is it possible to test effectively?" Can we expect to make the best use of people and other resources to test the software? Can we expect to find a reasonable number of defects?
In RAD, process control is essential for finding defects with any degree of effectiveness. Since the norm is not to have a repeatable process for most of what we do in building software, many people testing in a RAD environment try a few test cases here and there and look for defects, which makes defects very difficult to find.
What strategies can be used?
It takes some time to learn which method works in which environment; each system or environment really needs its own strategy. But there are some strategies that can be used for testing during rapid evolution:
First, accept the fact that there is no luxury of time to conduct a six-week test of software that changes daily. That means a testing process must be defined that can be performed rapidly and efficiently.
Second, perform a risk assessment. Knowing the level of risk is crucial, because the testing effort has to be prioritized within a short time window. The higher the risk, the more testing should be done.
Automate your tests. Capture / playback tools help to make the tests repeatable and unattended in a session. Good tools require a significant investment in software and training, but it beats working 24 hours a day. There are things to consider before automation:
There should be a basic version of the software for comparison with future tests.
Requirements, test cases and test scenarios should be defined. The tool can record and play back the actions the user performs.
Data is a key element; managing the test data is central to testing.
It takes time and a lot of money to integrate the tool into your organization. People need to be trained to use the tool. In addition, people need to be sold on the long-term benefits relative to the short-term work needed to set up test scripts and tests.
Conclusion
Testing during rapid evolution is possible, but it requires rapid response, smart work and traceability. Organizations that are not willing to consider new technologies such as automated testing tools will not be able to test effectively during rapid change. It is like building a house with hand tools: it can be done, eventually.
Testing during rapid development also requires a new mindset and new organizational processes. Tools are not the answer by themselves. There must be a process that can be executed quickly and that makes the best use of people and time. It is the optimal mix of tools, processes and people that meets the challenge.
Friday, August 7, 2009
Some of the Risks associated with software product
1) Product Size Risks:
A few generic risks related to the size of the product are:
o Estimated size of the product and confidence in that estimate?
o Estimated size of product?
o Size of the database created or used by the product?
o Number of users of the product?
o Number of projected changes to the requirements for the product?
Risk will be high when a large difference is observed between expected values and values from past experience. It is good practice to compare expected figures with previous experience when carrying out risk analysis.
2) Business Impact Risks:
Few generic risks associated with the business impact are:
o Effect of the software product on income of the company?
o Reasonability for delivery dates?
o Number of clients expected to use the product.
o Stability of the customers' needs relative to the product?
o Number of other products / systems with which the concerned product is expected to be interoperable?
o Amount and quality of product which must be produced and delivered to the customer?
o Costs associated with overdue delivery or a faulty product?
3) Customer-Related Risks:
Different customers have different needs, and every customer is different. Some customers readily accept what is delivered to them; others complain about the value of the product. In some cases customers have a very good relationship with the product and the producer, and in other cases they do not. A bad customer represents a major threat to the project plan and a considerable risk for the project manager.
A checklist for the customer:
o Have you engaged with the customer in the past?
o Does the customer have a good idea of his requirement?
o Will the customer agree to spend time for requirement discussions?
o Is the customer enthusiastic to join in reviews?
o Is the customer technically familiar with the product area?
o Does the customer understand the software engineering process?
4) Process Related Risks:
Risks are high for a software product if the software engineering method is ill-defined, or if analysis, design and testing are not conducted in a planned fashion.
o Whether the organization has documented software development process?
o Whether the team members are following the documented software development process?
o Whether the third party programmers are also following the defined software development process.
o Whether keeping a track on the performance of third party programmers?
o Whether the development teams and testing teams are conducting official technical reviews at regular intervals?
o Whether results of every official technical review are properly documented?
o Whether configuration management is used to maintain consistency among system components?
o Is there any mechanism for controlling changes to customer requirements which have an impact on the software product?
5) Technology Related Risks:
o Whether the technology used is new to the organization?
o Whether the software has proper interface with new hardware configurations?
o Whether the software has proper interface with the database system whose function and performance have not been proven in the concerned application area?
o Whether any specialized user interfaces have been demanded by product requirements?
o Do requirements demand the use of any new analysis, design or testing methods?
o Do requirements put excessive performance constraints on the product?
6) Technical Risks:
o Whether specific methods used for software analysis?
o Whether specific conventions for code documentation defined and used?
o Whether any specific methods used for test case design?
o Whether software tools used to support planning and tracking activities?
o Whether configuration management tools used to control and track change activity throughout the software development process?
o Whether tools used to generate software prototypes?
o Whether tools used to support the testing process?
o Whether tools used to support the production and management of documentation?
o Whether quality metrics collected for all software projects?
o Whether productivity metrics collected for all software projects?
7) Environmental Risks:
o Whether a software project and process management tool available in the organization?
o Whether tools for analysis and design are available in the organization?
o Whether the methods delivered by the analysis and design tools are appropriate for the product to be built?
o Whether compilers or code generators are available for the product to be built?
o Whether testing tools are available for the product to be built?
o Whether software configuration management tools are available in the organization?
o Whether the environment needs a database or repository?
o Whether all the software tools are properly integrated with one another?
o Whether all members of the project team have received training on every tool?
8) Team Associated Risks:
o Whether best people are available in enough numbers for the project?
o Do the people have the right mixture of skills?
o Whether all team members are committed for the entire duration of the project?
Thursday, August 6, 2009
Steps involved in Testing
Testing principally consists of two key activities: 1) organizing sandboxes and 2) developing test cases.
1) Organizing Sandboxes: Database testing requires copies of the database, which are called sandboxes.
These sandboxes are of following three types
a) Functionality Sandbox: Here the new functionality of the database is checked, along with how it is reflected in the existing functionality. The tested sandbox then passes to the next stage, the integrated sandbox.
b) Integrated Sandbox: In this stage all the sandboxes are integrated and the system is tested as a whole.
c) QA sandbox: After the system is tested, sandboxes are sent for acceptance testing. This will ensure the quality of the database.
2) Development of test cases: The step by step process for the development of test cases is as under:
The first step is setting up of the test cases: Set up the database to a known state.
The sources of test data are
1) External test data.
2) Test scripts.
3) Test data with known values.
4) Real world data.
The second step is running the test cases: The test cases are then run. The running of the database test cases is analogous to usual development testing.
Traditional Approach of Test Case Execution:
Test cases are executed on the client side. The results produced are then validated against expected values.
Advantages of Traditional Approach: It is simple and no programming skill is required. It addresses not only the functionality of stored procedures, rules, triggers and data integrity, but also the functionality of the application as a whole.
Disadvantages of Traditional Approach:
1) Sometimes the results after test case execution do not necessarily indicate that the data itself was properly written to a record in the table.
2) When wrong results are sent back after the execution of test cases, it doesn't necessarily mean that the error is a database error.
3) A crucial danger with database testing, and with regression testing in particular, is coupling between tests. If we put the database into a known state and run several tests against that known state before resetting it, those tests are potentially coupled to one another.
Advanced Approach of Test Case Execution:
First of all we need to do a schematic preparation for Testing, which involves:
Generate a list of stored procedures, triggers, defaults, rules and so on. This will help us to have a good handle on the scope of testing required for database testing.
Thereafter we can follow the following points:
1. Obtain the schema for the test case table. Analyzing the schema will help us determine the following:
- Can any field be marked as NULL?
- Boundary values.
- Constraints.
- Connectivity of various variables?
- Is there any look Up table available to check?
- What are user defined data types?
- What are primary key and foreign key relationships among tables?
- What is the primary purpose of each stored procedure? Does it read data and produce output, write data, or both?
- What are the accepted parameters?
- What are the return values?
- When is the stored procedure called and by whom?
- When is a trigger fired?
The third step is to check the results: the actual database test results are compared with the expected database test results.
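A tiny sketch of this set-up / run / compare flow, using sqlite3 as a stand-in database (the table, values and file names are invented for the example):

#!/bin/sh
# 1. Set up the database in a known state.
rm -f test.db
sqlite3 test.db "CREATE TABLE users(id INTEGER PRIMARY KEY, name TEXT NOT NULL);
                 INSERT INTO users(name) VALUES ('alice'), ('bob');"

# 2. Run the test case (here a simple query stands in for the code under test).
actual=$(sqlite3 test.db "SELECT count(*) FROM users WHERE name IS NOT NULL;")

# 3. Check the result against the expected value.
expected=2
if [ "$actual" = "$expected" ]; then
    echo "PASS: user count"
else
    echo "FAIL: expected $expected, got $actual"
fi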
Wednesday, August 5, 2009
Statistical control
Traditional statistical process controls in manufacturing operations usually proceed by randomly sampling and testing a fraction of the output. Variances of critical tolerances are continuously tracked, and manufacturing processes are corrected before bad parts can be produced.
Statistical Process Control (SPC) is an effective method of monitoring a process through the use of control charts. Control charts allow the use of objective criteria for distinguishing background variation from events of significance, based on statistical techniques. Much of SPC's power lies in the ability to monitor both the process center and its variation about that center. By collecting data from samples at various points within the process, variations in the process that may affect the quality of the end product or service can be detected and corrected, thus reducing waste as well as the likelihood that problems will be passed on to the customer. With its emphasis on early detection and prevention of problems, SPC has a distinct advantage over quality methods, such as inspection, that apply resources to detecting and correcting problems in the end product or service.
In addition to reducing waste, SPC can lead to a reduction in the time required to produce the product or service from end to end. This is partially due to a diminished likelihood that the final product will have to be reworked, but it may also result from using SPC data to identify bottlenecks, wait times, and other sources of delays within the process. Process cycle time reductions coupled with improvements in yield have made SPC a precious tool from both a cost reduction and a customer satisfaction standpoint.
History
Statistical Process Control was pioneered by Walter A. Shewhart of the Bell Telephone Laboratories in the early 1920s; he was the first to apply the newly developed statistical methods to the problem of quality control in manufacturing. He issued a memorandum on May 16, 1924 that featured a sketch of a modern control chart. W. Edwards Deming later applied SPC methods in the United States during World War II, thereby successfully improving quality in the manufacture of weapons and other strategically important products. Deming was also instrumental in introducing SPC methods to Japanese industry after the war had ended.
Through carefully designed experiments, Shewhart created the basis for the control chart and the concept of a state of statistical control. While he drew from pure mathematical statistical theories, he understood that data from physical processes seldom produces a "normal distribution curve". He found that observed variation in manufacturing data did not always behave the same way as data in nature, and concluded that while every process displays variation, some processes display controlled variation that is natural to the process, while others display uncontrolled variation that is not present in the process causal system at all times.
In 1989, the Software Engineering Institute introduced the idea that SPC can be applied to non-manufacturing processes, such as software engineering processes, in the Capability Maturity Model (CMM). This idea exists today within the Level 4 and Level 5 practices of the Capability Maturity Model Integrated (CMMI).
General
In mass manufacturing, the quality of the finished article was traditionally achieved through post-manufacturing inspection of the product, accepting or rejecting each piece based on how well it met its design specifications. In contrast, Statistical Process Control uses statistical tools to observe the performance of the production process in order to predict significant deviations that may later result in rejected product.
Two kinds of variation occur in all manufacturing processes. The first is known as natural or common-cause variation and may come from variation in temperature, differences in raw materials, voltage fluctuations and so on. This variation is small, with the observed values generally being quite close to the average value. The second kind is known as special-cause variation, and happens less frequently than the first.
How to Use SPC
Initially, one starts with a quantity of data from a manufacturing process for a definite metric, e.g. the mass, length or surface energy of a widget. There should be an upper and a lower control limit: the Upper Control Limit is typically set to the average plus three standard deviations, and the Lower Control Limit to the average minus three standard deviations. The action taken depends on the gauge and on where each run lands on the SPC chart, the aim being to control but not tamper with the process.
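A small sketch of that control-limit calculation over a file of sample measurements (one value per line; the file name samples.txt is hypothetical), using awk from the shell:

#!/bin/sh
# Compute the mean, standard deviation and 3-sigma control limits.
awk '{ x[NR] = $1; sum += $1 }
END {
    mean = sum / NR
    for (i = 1; i <= NR; i++) ss += (x[i] - mean) ^ 2
    sigma = sqrt(ss / NR)
    printf "mean=%.3f sigma=%.3f UCL=%.3f LCL=%.3f\n",
           mean, sigma, mean + 3 * sigma, mean - 3 * sigma
}' samples.txt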
Over time, other process-monitoring tools have been developed, including:
Cumulative Sum (CUSUM) charts: the ordinate of each plotted point represents the algebraic sum of the previous ordinate and the most recent deviations from the target.
Exponentially Weighted Moving Average (EWMA) charts: each chart point represents the weighted average of current and all previous subgroup values, giving more weight to recent process history and decreasing weights for older data.
Failure testing
Failure testing is an important part of the manufacturing process, no matter what is being manufactured. It is a way to ensure that the product or service will not fail under different circumstances and conditions of stress, weather, temperature, and so on. Continuous failure testing, even after a product is developed, helps ensure that manufacturing processes are running as well as possible and that the products and services are continually improving.
When a product fails, examine those failures quickly so that the problem can be corrected. When failure testing is performed on a component that has failed, or simply to test for a known failure, it is necessary to correlate observations of a number of aspects of the module: the appearance of the component or product, its composition and its strength. Also keep in mind the design of the product, the operating conditions, the service environment and the manufacturing record.
Failure testing involves many of the same components and practices as failure analysis. Failure analysis occurs after the fact, but failure testing strives to occur before the fact, so that failure can hopefully be avoided by continually testing products and components and improving them before they fail. It is beneficial to you and your customers to engage in failure testing on a regular basis, so that future problems can be prevented. There are a number of different ways to go about failure testing; the best approach will be specific to your particular industry. A good overall approach to manufacturing processes, such as lean manufacturing or Six Sigma, will include failure testing as part of its approach to process management.
Ensure that the tests conducted take into consideration the material composition, the macrostructure and the microstructure of the particular component, the distribution of hardness, the mechanical properties of the component, how well the component or product resists deterioration, what happens when the product is put into prolonged contact with saline, the effect of moisture on the product or component, different environmental exposures, and what happens when the product is confronted with abrasives. It is advisable to look carefully at fatigue, fracture testing, the flexural, yield and ultimate strength of your product or component, the impact strength and corrosion resistance of every component of your product, and more.
Failure testing helps ensure the quality of the products and the services. Regular failure testing is a preventative measure rather than a corrective measure taken after a potentially disastrous failure. Quality within a business is usually defined in terms of the relationship between the customer, the process or product, and the business.
Stress testing is a form of testing used to determine the stability of a given system or entity. It involves testing beyond normal operational capacity, often to a breaking point, in order to observe the results. Stress testing is a subset of load testing, and it may have a more specific meaning in certain industries.
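A crude sketch of pushing a system beyond its normal load from the shell (the client command and the level of concurrency are invented; a real stress test would ramp the load up further and also measure response times):

#!/bin/sh
# Launch many concurrent invocations of a hypothetical client against the
# system under test, then count the failures recorded by each invocation.
CONCURRENCY=200
: > fail.count

for i in $(seq 1 "$CONCURRENCY"); do
    ( ./myclient --request sample_payload > /dev/null 2>&1 || echo 1 >> fail.count ) &
done
wait    # wait for all background requests to finish

echo "failures under load: $(wc -l < fail.count) of $CONCURRENCY"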
Devious testers can no doubt add their own favorite ways of breaking systems.
In the financial industry, for example, stress testing practices employed prior to the start of the crisis covered four broad areas: (i) the use of stress testing and its integration in risk governance; (ii) stress testing methodologies; (iii) scenario selection; and (iv) stress testing of specific risks and products.
Tuesday, August 4, 2009
Quality assurance activity
One of the commonly used prototypes for QA management is the PDCA (Plan-Do-Check-Act) approach.
Plan–Do–Check–Act Cycle
The concept of the PDCA Cycle was originally developed by Walter Shewhart, the pioneering statistician who developed statistical process control at the Bell Laboratories in the 1920s.
Description
The plan–do–check–act cycle (Figure 1) is a four-step model for carrying out change. Just as a circle has no end, the PDCA cycle should be repeated again and again for continuous improvement. Use the PDCA cycle to coordinate your continuous improvement efforts. It both emphasizes and demonstrates that improvement programs must start with careful planning, must result in effective action, and must move on again to careful planning in a continuous cycle.
Figure 1: Plan-do-check-act cycle
When does Plan-Do-Check-Act come into play?
It is a model of continuous improvement and comes into play:
When starting a new improvement project.
When developing a new or improved design of a process, product or service.
When defining a repetitive work process.
When planning data collection and analysis in order to verify and prioritize problems or root causes.
When implementing any change.
Plan-Do-Check-Act Procedure
Plan. Recognize an opportunity and plan a change. Plan to improve your operations first by finding out what is going wrong (that is, identify the problems faced), and come up with ideas for solving these problems.
Do. Test the change. Carry out a small-scale study. Make the changes designed to solve the problems on a small or experimental scale first. This minimises disruption to routine activity while testing whether the changes will work or not.
Check. Review the test, analyze the results, and identify what you have learned. Check whether the small-scale or experimental changes are achieving the desired result. Also, continuously check key activities (regardless of any experimentation going on) to ensure that you know the quality of the output at all times and can identify any new problems as they crop up.
Act. Take action based on what you learned in the check step: if the change did not work, go through the cycle again with a different plan. If the experiment is successful, act to implement the changes on a larger scale. This means making the changes a routine part of your activity.
The diagram below lists the tools and techniques which can be used to complete each stage of the PDCA cycle.
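Purely as an illustration of the cycle (the defect-rate metric, the target and the "improvement ideas" below are invented for the example, not taken from any real project), the loop structure of PDCA can be sketched in a few lines of Python:

import random

def measure_defect_rate():
    # Check-step stand-in: inspect the output quality (random for the demo).
    return random.uniform(0.0, 0.1)

def pdca_cycle(target_defect_rate=0.02, max_cycles=5):
    for cycle in range(1, max_cycles + 1):
        plan = f"improvement idea #{cycle}"      # Plan: choose a change to try
        print(f"Do: trialling {plan} on a small scale")
        rate = measure_defect_rate()             # Check: measure the result
        if rate <= target_defect_rate:           # Act: adopt or re-plan
            print(f"Act: adopting {plan} (defect rate {rate:.3f})")
            return
        print(f"Act: defect rate {rate:.3f} is too high, re-planning")
    print("Target not reached; continue the cycle with new plans")

if __name__ == "__main__":
    pdca_cycle()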
Quality Management Components
Quality control is a process employed to ensure a certain level of quality in a product or service. It consists of whatever actions an industry deems necessary to provide for the control and verification of certain characteristics of a product or service. The basic goal of quality control is to make sure that the products provided meet specific requirements.
Quality control involves the examination of a product for certain minimum levels of quality. The goal of a quality control team is to identify products that do not meet a company's specified standards of quality. If a problem is identified, the quality control team may have to stop production temporarily, depending on the particular product and the type of problem identified.
Generally, it is not the job of the quality control team to correct quality issues. Other individuals who are involved discover the cause of quality issues and fix them. Once these kinds of problems are overcome, the product continues production or implementation as planned.
Quality control can cover not only products, services, and processes, but also people. Employees are an important part of the company. If a company has employees who lack adequate skills or training, have trouble understanding directions, or are misinformed, quality may be severely diminished. When quality control is applied to people, it concerns correctable issues.
Often, quality control is confused with quality assurance. Although the two are very similar, there are some basic differences: quality control is concerned with the product, while quality assurance is process oriented.
Even with such a clear-cut difference defined, identifying the differences between the two can be hard. Basically, quality control involves evaluating a product, whereas quality assurance is designed to make sure processes are sufficient to meet objectives. Quality assurance ensures a product is manufactured, implemented, created, or produced in the right way; quality control evaluates whether the end result is satisfactory.
Quality assurance (QA) is the activity of providing the evidence needed to establish confidence that quality-related activities are being performed effectively. It covers all the systematic actions that provide sufficient confidence that a product will satisfy the given requirements for quality.
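A small, invented example may make the distinction clearer: quality control checks the finished product against its specification, while quality assurance checks that the agreed process steps were actually carried out. The product fields, the limits and the process steps below are assumptions made up for this sketch.

def quality_control(product):
    # QC: evaluate whether the end result meets the specified limits.
    return product["weight_g"] <= 500 and product["defects"] == 0

def quality_assurance(process_log):
    # QA: confirm the required process steps were actually performed.
    required_steps = {"design review", "peer review", "unit tests", "inspection"}
    return required_steps.issubset(process_log)

product = {"weight_g": 480, "defects": 0}
process_log = {"design review", "peer review", "unit tests", "inspection"}
print("QC pass:", quality_control(product))
print("QA pass:", quality_assurance(process_log))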
Friday, July 24, 2009
Requirements Testing
1. Abstract
Testing the software is an integral part of building a system. But if the software is based on inaccurate requirements, then even well-written code will produce unsatisfactory software.
2. The Quality Gateway:
Once a requirement has been captured, testing can start. The aim is to catch requirements-related defects as early as they are identified, which prevents incorrect requirements from being incorporated into the design and implementation. To pass through the quality gateway, a requirement must pass a number of tests. These tests ensure that the requirements are accurate, so that they will not cause problems in the design and implementation stages later on.
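As a rough sketch of such a gateway (the field names and the individual checks below are hypothetical, not a prescribed format), each requirement can be run through a set of checks and sent back until it passes all of them:

def gateway_checks(requirement):
    # Return the reasons a requirement fails the gateway; an empty list means it passes.
    problems = []
    if not requirement.get("description"):
        problems.append("missing description")
    if not requirement.get("fit_criterion"):
        problems.append("no measurable fit criterion")
    vague_words = ("quickly", "user-friendly", "good value", "flexible")
    if any(word in requirement.get("description", "").lower() for word in vague_words):
        problems.append("contains unquantified, vague wording")
    return problems

requirements = [
    {"description": "The system must respond quickly to customer enquiries"},
    {"description": "Order lookup completes within 2 seconds for 95% of requests",
     "fit_criterion": "95th percentile lookup time <= 2 s under normal load"},
]
for req in requirements:
    issues = gateway_checks(req)
    print("PASS" if not issues else "REJECT: " + ", ".join(issues))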
3. Make the Requirement Measurable
There should be a quality measure for each requirement. Each requirement should have a quality measure that makes it possible to divide all solutions to the requirement into two classes: those that fit the requirement and those that do not. In other words, a quality measure for a requirement means that any solution that meets the measure will be acceptable, and any solution that does not meet the measure will not be acceptable. The quality measures are used for testing the new system against the requirements.
4. Quantifiable Requirements
A quantifiable requirement is one such as "the system must respond quickly to customer enquiries." The first step is to find a property of this requirement that provides a scale for measurement within the context, for example the maximum acceptable response time.
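Once a scale has been chosen, the quality measure can be tested directly. The sketch below assumes a hypothetical handle_enquiry function and an invented limit of two seconds; both are illustrations, not figures from the original requirement.

import time

def handle_enquiry(enquiry_id):
    # Stand-in for the real enquiry-handling code (an assumption for this sketch).
    time.sleep(0.2)
    return {"enquiry": enquiry_id, "status": "answered"}

def test_response_time(max_seconds=2.0, samples=20):
    # Measure the slowest of a batch of enquiries against the quality measure.
    worst = 0.0
    for i in range(samples):
        start = time.perf_counter()
        handle_enquiry(i)
        worst = max(worst, time.perf_counter() - start)
    assert worst <= max_seconds, f"slowest enquiry took {worst:.2f}s"
    print(f"Quality measure met: slowest of {samples} enquiries was {worst:.2f}s")

if __name__ == "__main__":
    test_response_time()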
5. Non-quantifiable Requirements
An attempt to define the quality measure for a requirement helps to clarify fuzzy requirements. Something like "the system must provide good value" is an example of a requirement that everyone would agree with, but each person has his or her own meaning for it. By investigating the scale that must be used to measure "good value", we identify the diverse meanings.
6. Coherency and Consistency
The requirements engineer intends each requirement to be understood in the same way by every person who reads it; in practice, this subjectivity means that many systems are built to satisfy the wrong interpretation of a requirement. The obvious solution to this problem is to specify the requirement in such a way that it can be understood in only one way.
Thursday, July 23, 2009
Black Box Testing
Black box testing takes an external perspective of the test object to derive test cases. These test cases can be functional or non-functional, though they are usually functional. The test designer selects valid and invalid inputs and determines the correct output. There is no knowledge of the test object's internal structure.
1. Testing Strategies/Techniques
* Black box testing should make use of randomly generated inputs, to eliminate any guesswork by the tester as to how the function is implemented.
* Data outside of the specified input range should be tested to check the robustness of the system.
* Boundary cases should be tested (top and bottom of the specified range) to make sure the highest and lowest allowable inputs produce proper output (a minimal code sketch follows this list).
* The number zero should be tested when numerical data is to be input
* Stress testing should be performed, especially with real time systems
* Crash testing should be performed to see what it takes to bring the system down
* Use test monitoring tools to track which tests have been performed and their outputs, to avoid repetition and to help with software maintenance.
* Other functional testing techniques include: Transaction testing, Syntax testing, Domain testing, Logic testing, and State testing.
* Finite state machine models can be used as a guide to design functional tests
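Several of these techniques can be seen in the minimal sketch below, applied to an invented discount-rule function whose specification (valid quantities 1 to 100, a 10% discount from 50 items upwards) is assumed purely for the example; the tests use only that specification, not the code inside the function.

import random

def discount_rate(quantity):
    # System under test; the black-box tests rely only on its specification.
    if not 1 <= quantity <= 100:
        raise ValueError("quantity out of range")
    return 0.10 if quantity >= 50 else 0.0

def test_black_box():
    # Boundary cases: bottom and top of the valid range, and the rule boundary.
    assert discount_rate(1) == 0.0
    assert discount_rate(100) == 0.10
    assert discount_rate(49) == 0.0 and discount_rate(50) == 0.10
    # Zero and out-of-range values should be rejected (robustness check).
    for bad in (0, -5, 101):
        try:
            discount_rate(bad)
            raise AssertionError(f"{bad} should have been rejected")
        except ValueError:
            pass
    # Randomly generated inputs inside the valid range must never crash.
    for _ in range(100):
        discount_rate(random.randint(1, 100))
    print("All black-box checks passed")

if __name__ == "__main__":
    test_black_box()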
2. Black Box Testing Strategy:
Black Box Testing is not a type of testing; it is a testing strategy. As the name "black box" indicates, no knowledge of internal logic or code structure is required. The types of testing under this strategy are entirely based on the requirements and functionality of the work product or software application. Black box testing is sometimes also called "Opaque Testing", "Functional/Behavioral Testing" or "Closed Box Testing".
The basis of the black box testing strategy lies in selecting appropriate data according to the functionality and testing it against the functional specifications. It is becoming common to hand the testing work over to a third party, since the developer of the system knows its internal logic and coding, which works against a purely external view.
In order to implement the black box testing strategy, the tester needs to be thorough with the requirement specifications of the system and, as a user, should know how the system should behave in response to a particular action.
Various testing types that fall under the Black Box Testing strategy are: functional testing, stress testing, recovery testing, volume testing, User Acceptance Testing (also known as UAT), system testing, Sanity or Smoke testing, load testing, Usability testing, Exploratory testing, ad-hoc testing, alpha testing, beta testing etc.
These testing types are again divided into two groups:
a) testing in which the user plays the role of the tester, and
b) testing in which the user is not required.
3. Advantages
More effective on larger units of code.
Tester needs no knowledge of implementation.
Tester and programmer are independent of each other.
Tests are done from a user's point of view.
Will help to expose any ambiguities in the specifications
Test cases can be designed as soon as the specifications are complete.
4. Disadvantages
Only a small number of possible inputs can actually be tested.
Without clear and concise specifications, test cases are hard to design.
There may be redundant test inputs if the tester is not informed of test cases the programmer has already tried.
May leave many program paths untested.
Cannot be directed toward specific segments of code, which may be very complex.
Wednesday, July 22, 2009
Prototype Model
This model increases the flexibility of the development process by allowing the client to interact and experiment with a working model of the product. The development process only continues once the client is satisfied with the functioning of the prototype; by that time the developer has come to know the client's real needs.
Software prototyping
Software prototyping is the creation of prototypes, i.e., incomplete versions of the software program being developed. A prototype contains only a small subset of the features of the actual program, and its implementation is typically not robust. The purpose of a prototype is to allow users of the software to evaluate the design of the actual product by trying it out.
Benefits:
The software developer can obtain early feedback from the users. The client can check whether the software matches the specification according to which it is being built, and the feedback helps the developer keep the project's goals on track.
The process of prototyping involves the following steps:
1) Identify basic requirements
Determine basic requirements including the input and output information desired. Details, such as security, can typically be ignored.
2) Develop Initial Prototype
An initial prototype is developed that includes only the user interfaces (a minimal sketch of such a UI-only prototype appears after these steps).
3) Review
The customers, including end-users, examine the prototype and provide feedback on additions or changes.
4) Revise and Enhance the Prototype
Using the feedback, both the specifications and the prototype can be improved.
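As an illustration only, an initial, UI-only prototype can be as small as a menu that shows the intended screens without doing any real processing; the menu options and screen texts below are invented for this sketch.

SCREENS = {
    "1": "Search orders  -> [results table would appear here]",
    "2": "Place an order -> [order form would appear here]",
    "3": "View reports   -> [charts would appear here]",
}

def prototype():
    # UI-only prototype: show the intended interaction, with no business logic behind it.
    while True:
        print("\nOrder System (prototype)")
        for key, label in SCREENS.items():
            print(f"  {key}. {label.split('->')[0].strip()}")
        choice = input("Choose an option (q to quit): ").strip()
        if choice == "q":
            break
        print(SCREENS.get(choice, "Unknown option"))

if __name__ == "__main__":
    prototype()

Because nothing behind the menu is real, this kind of prototype is naturally a throwaway: its only job is to let the client react to the proposed screens early.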
Classifications
Prototyping can be classified into throwaway prototyping and evolutionary prototyping.
Throwaway or Rapid Prototyping refers to the creation of a model that will eventually be discarded rather than becoming part of the finally delivered software.
The main goal when using Evolutionary Prototyping is to build a very robust prototype in a structured manner and constantly refine it.
Advantages
Reduced time and costs
Improved and increased user involvement
Disadvantages
Insufficient analysis
User confusion of prototype and finished system
Developer attachment to prototype
Excessive development time of the prototype
Expense of implementing prototyping
Methods
Dynamic Systems Development Method (DSDM) is a framework for delivering business solutions that relies heavily upon prototyping as a core technique, and is itself ISO 9001 approved.
The four categories of prototypes as recommended by DSDM are:
- Business prototypes.
- Usability prototypes.
- Performance and capacity prototypes.
- Capability/technique prototypes.
Tools
Visual Basic, etc.
Thursday, February 12, 2009
Two communications satellites collide
Two big communications satellites collided in the first-ever crash of its kind in orbit, shooting out a pair of massive debris clouds and posing a slight risk to the international space station. NASA said it will take weeks to determine the full magnitude of the crash, which occurred nearly 500 miles over Siberia on Tuesday.
NASA believes any risk to the space station and its three astronauts is low. It orbits about 270 miles below the collision course. There also should be no danger to the space shuttle set to launch with seven astronauts on Feb. 22, officials said, but that will be re-evaluated in the coming days.
The collision involved an Iridium commercial satellite, which was launched in 1997, and a Russian satellite launched in 1993 and believed to be nonfunctioning.
This collision calls into question the credibility of the scientists who work behind these systems.
Only God can save the world...
For details:
http://www.aviransplace.com/2009/02/12/two-big-satellites-collide-500-miles-over-siberia/
http://www.burlingtonfreepress.com/article/20090211/NEWS/90211032/-1/rss
http://www.thestandard.com/news/2009/02/11/2-orbiting-satellites-collide-500-miles