Saturday, November 20, 2010

primary screening


INTRODUCTION:-
Isolation of Microorganisms - The first step in developing a producer strain is the isolation of the concerned microorganisms from their natural habitats. Alternatively, microorganisms can be obtained as pure cultures from organisations that maintain culture collections, e.g.,
the American Type Culture Collection (ATCC), Rockville, Maryland, U.S.A.; the Commonwealth Mycological Institute (CMI), Kew, Surrey, England; the Fermentation Research Institute
(FERM), Tokyo, Japan; the U.S.S.R. Research Institute for Antibiotics (RIA), Moscow, U.S.S.R.; etc. The microorganisms of industrial importance are generally bacteria, actinomycetes, fungi and algae. These organisms occur virtually everywhere, e.g., in air, water, soil, on the surfaces of plants and animals, and in plant and animal tissues. But the most common sources of industrial microorganisms are soils, and lake and river mud.
Often the ecological habitat from which a desired microorganism is most likely to be isolated depends on the characteristics of the product desired from it and on the process to be developed. For example, if the objective is to isolate a source of enzymes that can withstand high temperatures, the obvious place to look is hot water springs. A variety of complex isolation procedures have been developed, but no single method can reveal all the microorganisms present in a sample. Many different microorganisms can be isolated by using specialized enrichment techniques, e.g., soil treatment (UV irradiation, air drying or heating at 70-120°C, filtration or continuous percolation, washings from root systems, treatment with detergents or alcohols, preinoculation with toxic agents), selective inhibitors (antimetabolites, antibiotics, etc.), nutritional selection (specific C and N sources), and variations in pH, temperature, aeration, etc. The enrichment techniques are designed for the selective multiplication of only some of the microorganisms present in a sample. These approaches, however, take a long time (20-40 days) and require considerable labour and money.
The main methods used routinely for isolation from soil samples are: sprinkling soil directly onto agar, dilution, gradient plate, aerosol dilution, flotation, and differential centrifugation. Often these methods are used in conjunction with an enrichment technique.
Screening of Microorganisms for New Products - The next step after the isolation of microorganisms is their screening. A set of highly selective procedures that allows the detection and isolation of microorganisms producing the desired metabolite constitutes primary screening.
                                                                      
SCREENING
In microbial technology, the microorganism holds the key to the success or failure of a fermentation process. It is therefore important to select the most suitable microorganism to carry out the desired industrial process.
The most important factor for the success of any fermentation industry is the choice of production strain. It is highly desirable to use a production strain possessing the following four characteristics:
It should be a high-yielding strain.
It should have stable biochemical/genetic characteristics.
It should not produce undesirable substances.
It should be easily cultivated on a large scale.

Def:
Detection and isolation of high-yielding species from natural source material, such as soil, containing a heterogeneous microbial population is called screening.
OR
Screening may be defined as the use of highly selective procedures to allow the detection and isolation of only those microorganisms of interest from among a large microbial population.

Thus, to be effective, screening must, in one or a few steps, allow the discarding of many valueless microorganisms while at the same time allowing the easy detection of the small percentage of useful microorganisms present in the population.
The concept of screening will be illustrated by citing specific examples of screening procedures that are or have been commonly employed in industrial research programs.
In screening programs (the crowded plate technique excepted), a natural source such as soil is diluted to provide a cell concentration such that aliquots spread, sprayed or applied in some other manner to the surface of agar plates will yield well-isolated colonies (30-300 per plate).
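As a rough sketch of the dilution arithmetic behind the 30-300 colony target, the following finds which steps of a tenfold dilution series should give a countable plate (the starting cell density and plated volume are assumed, illustrative figures, not data from any particular sample):

```python
# Estimate which serial dilution of a soil suspension will yield a countable
# plate (30-300 colonies), given an assumed starting cell density.

def countable_dilutions(cells_per_ml, plated_volume_ml=0.1,
                        dilution_factor=10, max_steps=10):
    """Return the dilution steps expected to give 30-300 colonies per plate."""
    steps = []
    for step in range(max_steps + 1):
        expected = cells_per_ml * plated_volume_ml / dilution_factor ** step
        if 30 <= expected <= 300:
            steps.append(step)
    return steps

# A gram of soil is often assumed here to yield ~10^8 culturable cells per ml.
print(countable_dilutions(1e8))  # → [5], i.e. plate the 10^-5 dilution
```

In practice several adjacent dilutions are plated, since the true cell density of the sample is unknown in advance.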

Primary screening of organic acid/amine producers:
· For primary screening of organic acid or organic amine producers, a soil sample is taken as the source of microorganisms.
· It is diluted serially to an extent that gives well-isolated colonies on the plate when spread or applied in some form.
· These dilutions are then plated on a poorly buffered nutrient agar medium incorporating a pH-indicating dye such as neutral red (pink to yellow) or bromothymol blue (yellow to blue). The production of these compounds is indicated by a change in the color of the dye in the close vicinity of a colony to a color representing an acidic or alkaline reaction.
Fig. Detection of fungi capable of producing organic acids by incorporation of CaCO3 into the agar medium.

· The usefulness of this procedure is increased if media of greater buffer capacity are utilized so that only those microorganisms that produce considerable quantities of the acid or amine can induce changes in the color of the dye.
An alternative procedure for detecting organic acid production involves the incorporation of calcium carbonate (1-2%) in the medium, so that organic acid production is indicated by a cleared zone of dissolved calcium carbonate around the colony. These procedures are not foolproof, however, since inorganic acids or bases are also potential products of microbial growth. For instance, if the nitrogen source of the medium is the nitrogen of ammonium sulfate, the organism may utilize the ammonium ion, leaving behind the sulfate ion as sulfuric acid, a condition indistinguishable from organic acid production. Thus, cultures yielding positive reactions require further testing to be sure that an organic acid or base actually has been produced.


Primary screening of antibiotic producer (Crowded plate technique):
Fig. Crowded plate screening for antibiotic-producing microorganisms. Notice the inhibition of growth around several of the colonies.

· The crowded plate technique is the simplest screening technique employed in detecting and isolating antibiotic producers.
· It consists of preparing a series of dilutions of the source material for antibiotic-producing microorganisms, followed by spreading the dilutions on agar plates.
· After incubation for 2-4 days, agar plates having 300-400 or more colonies per plate are observed, since such crowded plates are helpful in locating colonies showing antibiotic activity.
· Antibiotic activity is indicated by the presence of a zone of inhibition (arrow in fig.) surrounding a colony.
· Such a colony is sub-cultured to a similar medium and purified.
· Further testing is necessary to confirm that the inhibitory activity associated with a microorganism can really be attributed to an antibiotic, since a zone of inhibition surrounding a colony may sometimes be due to other causes. Notable among these are a marked change in the pH value of the medium resulting from the metabolism of the colony, or rapid utilization of critical nutrients in the immediate vicinity of the colony.
The crowded plate technique has limited application, since usually we are interested in finding a microorganism producing antibiotic activity against a specific microorganism, not against the unknown microorganisms that happened by chance to be on the plate in the vicinity of an antibiotic-producing organism. Antibiotic screening is therefore improved by the incorporation into the procedure of a "test organism", that is, an organism used as an indicator for the presence of specific antibiotic activity.
Dilutions of soil or of other microbial sources are applied to the surface of agar plates so that well-isolated colonies will develop. The plates are incubated until the colonies are a few millimeters in diameter, so that antibiotic production will have occurred for those organisms having this potential. A suspension of the test organism is then sprayed or applied in some manner to the surface of the agar, and the plates are further incubated to allow growth of the test organism. Antibiotic activity is indicated by zones of inhibited growth of the test organism around antibiotic-producing colonies. In addition, a rough approximation of the relative amount of antibiotic produced by various colonies can be gained by measuring, in millimeters, the diameters of the zones of inhibited test organism growth. Antibiotic-producing colonies again must be isolated and purified before further testing.
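The rough ranking of producers by zone diameter described above amounts to a simple sort (the colony names and measurements below are invented for illustration):

```python
# Rank antibiotic-producing colonies by the diameter (mm) of the zone of
# inhibited test-organism growth; a wider zone suggests more antibiotic.
zones_mm = {"colony A": 12.0, "colony B": 25.5, "colony C": 8.0, "colony D": 18.0}

ranked = sorted(zones_mm.items(), key=lambda kv: kv[1], reverse=True)
for name, diameter in ranked:
    print(f"{name}: {diameter} mm")
```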

Primary screening of growth factor (amino acid/vitamin) producers (auxanography):

This technique is largely employed for detecting microorganisms able to produce growth factors (e.g., amino acids and vitamins) extracellularly. The two major steps are as follows:
Step I
A filter paper strip is placed across the bottom of a petri dish in such a way that its two ends pass over the edge of the dish.
A filter paper disc of petri dish size is placed over the paper strip on the bottom of the plate.
The nutrient agar is poured onto the paper disc in the dish and allowed to solidify.
Microbial source material, such as soil, is diluted such that aliquots on plating will produce well-isolated colonies.
Aliquots of the properly diluted soil sample are plated.
Step II
A minimal medium lacking the growth factor under consideration is seeded with the test organism.
The seeded medium is poured into a fresh petri dish and allowed to solidify.
The agar in the first plate, as prepared in Step I, is carefully and aseptically lifted out with the help of tweezers and a spatula and placed, without inverting, on the surface of the agar in the second plate, as prepared in Step II.
The growth factor(s) produced by colonies on the surface of the first layer of agar can diffuse into the lower layer of agar containing the test organism. A zone of stimulated growth of the test organism around a colony indicates that it produces the growth factor(s) extracellularly. Productive colonies are sub-cultured and tested further.
OR
A similar screening approach can be used to find microorganisms capable of synthesizing extracellular vitamins, amino acids or other metabolites. However, the medium as made up must totally lack the metabolite under consideration. Again, the microbial source is diluted and plated to provide well-isolated colonies, and the test organism is applied to the plates before further incubation. The choice of the particular test organism to be used is critical: it must possess a definite growth requirement for the particular metabolite, and for that metabolite only, so that production of this compound will be indicated by zones of growth, or at least increased growth, of the test organism adjacent to colonies that have produced the metabolite.

Enrichment culture technique:
This technique was designed by the soil microbiologist Beijerinck to isolate desired microorganisms from a heterogeneous microbial population present in soil. Either the medium or the incubation conditions are adjusted so as to favour the growth of the desired microorganism. Unwanted microbes, on the other hand, are eliminated or develop poorly, since they do not find suitable growth conditions in the newly created environment. Today this technique has become a valuable tool in many screening programs for isolating industrially important strains.
 

Monday, November 15, 2010

genetic algorithm

The genetic algorithm (GA) is a search heuristic that mimics the process of natural evolution. This heuristic is routinely used to generate useful solutions to optimization and search problems. Genetic algorithms belong to the larger class of evolutionary algorithms (EA), which generate solutions to optimization problems using techniques inspired by natural evolution, such as inheritance, mutation, selection, and crossover.

Methodology

In a genetic algorithm, a population of strings (called chromosomes or the genotype of the genome), which encode candidate solutions (called individuals, creatures, or phenotypes) to an optimization problem, evolves toward better solutions. Traditionally, solutions are represented in binary as strings of 0s and 1s, but other encodings are also possible. The evolution usually starts from a population of randomly generated individuals and happens in generations. In each generation, the fitness of every individual in the population is evaluated, multiple individuals are stochastically selected from the current population (based on their fitness), and modified (recombined and possibly randomly mutated) to form a new population. The new population is then used in the next iteration of the algorithm. Commonly, the algorithm terminates when either a maximum number of generations has been produced, or a satisfactory fitness level has been reached for the population. If the algorithm has terminated due to a maximum number of generations, a satisfactory solution may or may not have been reached.
Genetic algorithms find application in bioinformatics, phylogenetics, computational science, engineering, economics, chemistry, manufacturing, mathematics, physics and other fields.
A typical genetic algorithm requires:
  1. a genetic representation of the solution domain,
  2. a fitness function to evaluate the solution domain.
A standard representation of the solution is as an array of bits. Arrays of other types and structures can be used in essentially the same way. The main property that makes these genetic representations convenient is that their parts are easily aligned due to their fixed size, which facilitates simple crossover operations. Variable length representations may also be used, but crossover implementation is more complex in this case. Tree-like representations are explored in genetic programming and graph-form representations are explored in evolutionary programming.
The fitness function is defined over the genetic representation and measures the quality of the represented solution. The fitness function is always problem dependent. For instance, in the knapsack problem one wants to maximize the total value of objects that can be put in a knapsack of some fixed capacity. A representation of a solution might be an array of bits, where each bit represents a different object, and the value of the bit (0 or 1) represents whether or not the object is in the knapsack. Not every such representation is valid, as the size of objects may exceed the capacity of the knapsack. The fitness of the solution is the sum of values of all objects in the knapsack if the representation is valid, or 0 otherwise. In some problems, it is hard or even impossible to define the fitness expression; in these cases, interactive genetic algorithms are used.
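The knapsack fitness function described above can be written down directly (the item values, sizes and capacity below are invented for illustration):

```python
# Knapsack fitness for a bit-string chromosome: the sum of the values of the
# selected items if their total size fits the capacity, otherwise 0.
values = [10, 5, 8, 7, 3]   # illustrative item values
sizes  = [4, 2, 5, 3, 1]    # illustrative item sizes
CAPACITY = 8

def fitness(bits):
    total_size = sum(s for b, s in zip(bits, sizes) if b)
    if total_size > CAPACITY:
        return 0            # invalid representation: knapsack overfilled
    return sum(v for b, v in zip(bits, values) if b)

print(fitness([1, 1, 0, 0, 1]))  # sizes 4+2+1=7 <= 8, so value 10+5+3 = 18
print(fitness([1, 1, 1, 0, 0]))  # sizes 4+2+5=11 > 8, so fitness 0
```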
Once we have the genetic representation and the fitness function defined, GA proceeds to initialize a population of solutions randomly, then improve it through repetitive application of mutation, crossover, inversion and selection operators.

Initialization

Initially many individual solutions are randomly generated to form an initial population. The population size depends on the nature of the problem, but typically contains several hundreds or thousands of possible solutions. Traditionally, the population is generated randomly, covering the entire range of possible solutions (the search space). Occasionally, the solutions may be "seeded" in areas where optimal solutions are likely to be found.

Selection

During each successive generation, a proportion of the existing population is selected to breed a new generation. Individual solutions are selected through a fitness-based process, where fitter solutions (as measured by a fitness function) are typically more likely to be selected. Certain selection methods rate the fitness of each solution and preferentially select the best solutions. Other methods rate only a random sample of the population, as this process may be very time-consuming.
Most functions are stochastic and designed so that a small proportion of less fit solutions are selected. This helps keep the diversity of the population large, preventing premature convergence on poor solutions. Popular and well-studied selection methods include roulette wheel selection and tournament selection.
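The two selection methods named above can be sketched as follows (a minimal illustration rather than a reference implementation; note that roulette wheel selection as written assumes non-negative fitnesses):

```python
import random

def roulette_select(population, fitnesses):
    """Roulette wheel: pick one individual with probability proportional to fitness."""
    return random.choices(population, weights=fitnesses, k=1)[0]

def tournament_select(population, fitnesses, k=3):
    """Tournament: pick the fittest of k randomly sampled individuals."""
    contenders = random.sample(range(len(population)), k)
    best = max(contenders, key=lambda i: fitnesses[i])
    return population[best]
```

Both are stochastic: even a low-fitness individual can win a roulette spin, or avoid being drawn into a tournament, which is what preserves diversity.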

Reproduction

The next step is to generate a second generation population of solutions from those selected through genetic operators: crossover (also called recombination), and/or mutation.
For each new solution to be produced, a pair of "parent" solutions is selected for breeding from the pool selected previously. By producing a "child" solution using the above methods of crossover and mutation, a new solution is created which typically shares many of the characteristics of its "parents". New parents are selected for each new child, and the process continues until a new population of solutions of appropriate size is generated. Although reproduction methods based on two parents are more "biology inspired", some research[1][2] suggests that using more than two "parents" yields higher quality chromosomes.
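For bit-string chromosomes, the crossover and mutation operators described above can be sketched as follows (one-point crossover and per-bit flip mutation, one common choice among many):

```python
import random

def one_point_crossover(mom, dad):
    """Swap the tails of two equal-length bit lists at a random cut point."""
    cut = random.randint(1, len(mom) - 1)
    return mom[:cut] + dad[cut:], dad[:cut] + mom[cut:]

def mutate(bits, rate=0.01):
    """Flip each bit independently with probability `rate`."""
    return [b ^ 1 if random.random() < rate else b for b in bits]
```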
These processes ultimately result in the next generation population of chromosomes that is different from the initial generation. Generally the average fitness will have increased by this procedure for the population, since only the best organisms from the first generation are selected for breeding, along with a small proportion of less fit solutions, for reasons already mentioned above.
Although crossover and mutation are known as the main genetic operators, it is possible to use other operators such as regrouping, colonization-extinction, or migration in genetic algorithms.[2]

Termination

This generational process is repeated until a termination condition has been reached. Common terminating conditions are:
  • A solution is found that satisfies minimum criteria
  • Fixed number of generations reached
  • Allocated budget (computation time/money) reached
  • The highest ranking solution's fitness is reaching or has reached a plateau such that successive iterations no longer produce better results
  • Manual inspection
  • Combinations of the above
Simple generational genetic algorithm pseudocode
  1. Choose the initial population of individuals
  2. Evaluate the fitness of each individual in that population
  3. Repeat on this generation until termination: (time limit, sufficient fitness achieved, etc.)
    1. Select the best-fit individuals for reproduction
    2. Breed new individuals through crossover and mutation operations to give birth to offspring
    3. Evaluate the individual fitness of new individuals
    4. Replace least-fit population with new individuals
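The pseudocode above can be fleshed out into a minimal working sketch. This toy run maximizes the number of 1-bits in a string (the classic "one-max" problem) using 3-way tournament selection, one-point crossover and bit-flip mutation; all parameter values are illustrative, not recommendations:

```python
import random

def run_ga(bits=20, pop_size=30, generations=60,
           crossover_rate=0.9, mutation_rate=0.02, seed=1):
    """Generational GA for one-max: maximize the count of 1-bits."""
    random.seed(seed)
    fitness = lambda ind: sum(ind)

    # 1. Choose the initial population of individuals at random.
    population = [[random.randint(0, 1) for _ in range(bits)]
                  for _ in range(pop_size)]

    for _ in range(generations):
        # 2./3.1 Evaluate fitness and select parents by 3-way tournament.
        def select():
            return max(random.sample(population, 3), key=fitness)

        # 3.2 Breed new individuals through crossover and mutation.
        offspring = []
        while len(offspring) < pop_size:
            mom, dad = select(), select()
            if random.random() < crossover_rate:
                cut = random.randint(1, bits - 1)
                mom, dad = mom[:cut] + dad[cut:], dad[:cut] + mom[cut:]
            for child in (mom, dad):
                offspring.append([b ^ 1 if random.random() < mutation_rate else b
                                  for b in child])

        # 3.4 Replace the old population wholesale (no elitism in this sketch).
        population = offspring[:pop_size]

    return max(population, key=fitness)

best = run_ga()
print(sum(best), "of 20 bits set")
```

Because replacement here is wholesale, the best individual can occasionally be lost between generations; the elitist variant discussed later avoids this.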

The building block hypothesis

Genetic algorithms are simple to implement, but their behavior is difficult to understand. In particular it is difficult to understand why these algorithms frequently succeed at generating solutions of high fitness when applied to practical problems. The building block hypothesis (BBH) consists of:
  1. A description of a heuristic that performs adaptation by identifying and recombining "building blocks", i.e. low order, low defining-length schemata with above average fitness.
  2. A hypothesis that a genetic algorithm performs adaptation by implicitly and efficiently implementing this heuristic.
Goldberg describes the heuristic as follows:
"Short, low order, and highly fit schemata are sampled, recombined [crossed over], and resampled to form strings of potentially higher fitness. In a way, by working with these particular schemata [the building blocks], we have reduced the complexity of our problem; instead of building high-performance strings by trying every conceivable combination, we construct better and better strings from the best partial solutions of past samplings.
"Because highly fit schemata of low defining length and low order play such an important role in the action of genetic algorithms, we have already given them a special name: building blocks. Just as a child creates magnificent fortresses through the arrangement of simple blocks of wood, so does a genetic algorithm seek near optimal performance through the juxtaposition of short, low-order, high-performance schemata, or building blocks."[3]

Criticism of the building block hypothesis

The building block hypothesis has been sharply criticized on the grounds that it lacks theoretical justification, and experimental results have been published that draw the veracity of this hypothesis into question. On the theoretical side, for example, Wright et al. state that
"The various claims about GAs that are traditionally made under the name of the building block hypothesis have, to date, no basis in theory and, in some cases, are simply incoherent."[4]
On the experimental side uniform crossover was seen to outperform one-point and two-point crossover on many of the fitness functions studied by Syswerda.[5] Summarizing these results, Fogel remarks that
"Generally, uniform crossover yielded better performance than two-point crossover, which in turn yielded better performance than one-point crossover."[6]
Syswerda's results contradict the building block hypothesis because uniform crossover is highly disruptive of short schemata, whereas one and two-point crossover are much less disruptive. Given these problems with the building block hypothesis, the adaptive capacity of genetic algorithms is currently something of a mystery.

Observations

There are several general observations about the generation of solutions specifically via a genetic algorithm:
  • Selection is clearly an important genetic operator, but opinion is divided over the importance of crossover versus mutation. Some argue that crossover is the most important, while mutation is only necessary to ensure that potential solutions are not lost. Others argue that crossover in a largely uniform population only serves to propagate innovations originally found by mutation, and in a non-uniform population crossover is nearly always equivalent to a very large mutation (which is likely to be catastrophic). There are many references in Fogel (2006) that support the importance of mutation-based search, but across all problems the No Free Lunch theorem holds, so these opinions are without merit unless the discussion is restricted to a particular problem.
  • As with all current machine learning problems, it is worth tuning parameters such as mutation probability, crossover probability and population size to find reasonable settings for the problem class being worked on. A very small mutation rate may lead to genetic drift (which is non-ergodic in nature). A recombination rate that is too high may lead to premature convergence of the genetic algorithm. A mutation rate that is too high may lead to loss of good solutions unless there is elitist selection. There are theoretical but not yet practical upper and lower bounds for these parameters that can help guide selection.

Criticisms

There are several criticisms of the use of a genetic algorithm compared to alternative optimization algorithms:
  • Repeated fitness function evaluation for complex problems is often the most prohibitive and limiting segment of artificial evolutionary algorithms. Finding the optimal solution to complex high-dimensional, multimodal problems often requires very expensive fitness function evaluations. In real world problems such as structural optimization problems, one single function evaluation may require several hours to several days of complete simulation. Typical optimization methods cannot deal with such types of problem. In this case, it may be necessary to forgo an exact evaluation and use an approximated fitness that is computationally efficient. It is apparent that amalgamation of approximate models may be one of the most promising approaches to convincingly use GA to solve complex real life problems.
  • The "better" is only in comparison to other solutions. As a result, the stop criterion is not clear in every problem.
  • In many problems, GAs may have a tendency to converge towards local optima or even arbitrary points rather than the global optimum of the problem. This means that it does not "know how" to sacrifice short-term fitness to gain longer-term fitness. The likelihood of this occurring depends on the shape of the fitness landscape: certain problems may provide an easy ascent towards a global optimum, others may make it easier for the function to find the local optima. This problem may be alleviated by using a different fitness function, increasing the rate of mutation, or by using selection techniques that maintain a diverse population of solutions, although the No Free Lunch theorem[7] proves that there is no general solution to this problem. A common technique to maintain diversity is to impose a "niche penalty", wherein any group of individuals of sufficient similarity (niche radius) have a penalty added, which will reduce the representation of that group in subsequent generations, permitting other (less similar) individuals to be maintained in the population. This trick, however, may not be effective, depending on the landscape of the problem. Another possible technique would be to simply replace part of the population with randomly generated individuals, when most of the population is too similar to each other. Diversity is important in genetic algorithms (and genetic programming) because crossing over a homogeneous population does not yield new solutions. In evolution strategies and evolutionary programming, diversity is not essential because of a greater reliance on mutation.
  • Operating on dynamic data sets is difficult, as genomes begin to converge early on towards solutions which may no longer be valid for later data. Several methods have been proposed to remedy this by increasing genetic diversity somehow and preventing early convergence, either by increasing the probability of mutation when the solution quality drops (called triggered hypermutation), or by occasionally introducing entirely new, randomly generated elements into the gene pool (called random immigrants). Again, evolution strategies and evolutionary programming can be implemented with a so-called "comma strategy" in which parents are not maintained and new parents are selected only from offspring. This can be more effective on dynamic problems.
  • GAs cannot effectively solve problems in which the only fitness measure is a single right/wrong measure (like decision problems), as there is no way to converge on the solution (no hill to climb). In these cases, a random search may find a solution as quickly as a GA. However, if the situation allows the success/failure trial to be repeated giving (possibly) different results, then the ratio of successes to failures provides a suitable fitness measure.

Variants

The simplest algorithm represents each chromosome as a bit string. Typically, numeric parameters can be represented by integers, though it is possible to use floating point representations. The floating point representation is natural to evolution strategies and evolutionary programming. The notion of real-valued genetic algorithms has been offered but is really a misnomer because it does not really represent the building block theory that was proposed by Holland in the 1970s. This theory is not without support though, based on theoretical and experimental results (see below). The basic algorithm performs crossover and mutation at the bit level. Other variants treat the chromosome as a list of numbers which are indexes into an instruction table, nodes in a linked list, hashes, objects, or any other imaginable data structure. Crossover and mutation are performed so as to respect data element boundaries. For most data types, specific variation operators can be designed. Different chromosomal data types seem to work better or worse for different specific problem domains.
When bit-string representations of integers are used, Gray coding is often employed. In this way, small changes in the integer can be readily effected through mutations or crossovers. This has been found to help prevent premature convergence at so called Hamming walls, in which too many simultaneous mutations (or crossover events) must occur in order to change the chromosome to a better solution.
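Binary-reflected Gray coding, as mentioned above, can be computed with two short helpers (a standard construction, shown here purely for illustration):

```python
# Binary-reflected Gray code: adjacent integers differ in exactly one bit,
# so a single bit-flip mutation moves the decoded value by one step
# and no Hamming wall separates neighbouring values.

def to_gray(n):
    return n ^ (n >> 1)

def from_gray(g):
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

for i in range(8):
    print(i, format(to_gray(i), "03b"))
```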
Other approaches involve using arrays of real-valued numbers instead of bit strings to represent chromosomes. Theoretically, the smaller the alphabet, the better the performance, but paradoxically, good results have been obtained from using real-valued chromosomes.
A very successful (slight) variant of the general process of constructing a new population is to allow some of the better organisms from the current generation to carry over to the next, unaltered. This strategy is known as elitist selection.
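Elitist selection amounts to a small change in the replacement step (a sketch; the `fitness` argument and `elite` count are illustrative names, not part of any standard API):

```python
def next_generation(population, offspring, fitness, elite=2):
    """Carry the `elite` best current individuals over unchanged,
    filling the rest of the new population from the offspring."""
    elites = sorted(population, key=fitness, reverse=True)[:elite]
    return elites + offspring[:len(population) - elite]
```

This guarantees the best-so-far solution is never lost between generations.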
Parallel implementations of genetic algorithms come in two flavours. Coarse-grained parallel genetic algorithms assume a population on each of the computer nodes and migration of individuals among the nodes. Fine-grained parallel genetic algorithms assume an individual on each processor node which acts with neighboring individuals for selection and reproduction. Other variants, like genetic algorithms for online optimization problems, introduce time-dependence or noise in the fitness function.
Genetic algorithms with adaptive parameters (adaptive genetic algorithms, AGAs) is another significant and promising variant of genetic algorithms. The probabilities of crossover (pc) and mutation (pm) greatly determine the degree of solution accuracy and the convergence speed that genetic algorithms can obtain. Instead of using fixed values of pc and pm, AGAs utilize the population information in each generation and adaptively adjust the pc and pm in order to maintain the population diversity as well as to sustain the convergence capacity. In AGA (adaptive genetic algorithm),[8] the adjustment of pc and pm depends on the fitness values of the solutions. In CAGA (clustering-based adaptive genetic algorithm),[9] through the use of clustering analysis to judge the optimization states of the population, the adjustment of pc and pm depends on these optimization states. The GEGA program is an ab initio gradient embedded GA, a program for finding the global minima of clusters developed by Anastassia Alexandrova at Utah State University. GEGA employs geometry-cuts for the GA, ab initio level of computation for geometry optimization and vibrational frequency analysis, with local minima only, and a specific mutational procedure based on the so called "kick technique".[10]
It can be quite effective to combine GA with other optimization methods. GA tends to be quite good at finding generally good global solutions, but quite inefficient at finding the last few mutations to find the absolute optimum. Other techniques (such as simple hill climbing) are quite efficient at finding absolute optimum in a limited region. Alternating GA and hill climbing can improve the efficiency of GA while overcoming the lack of robustness of hill climbing.
This means that the rules of genetic variation may have a different meaning in the natural case. For instance – provided that steps are stored in consecutive order – crossing over may sum a number of steps from maternal DNA and a number of steps from paternal DNA, and so on. This is like adding vectors that are more likely to follow a ridge in the phenotypic landscape. Thus, the efficiency of the process may be increased by many orders of magnitude. Moreover, the inversion operator has the opportunity to place steps in consecutive order, or in any other suitable order, in favour of survival or efficiency. (See for instance [11] or the example in the travelling salesman problem.)
A variation, where the population as a whole is evolved rather than its individual members, is known as gene pool recombination.

Problem domains

Problems which appear to be particularly appropriate for solution by genetic algorithms include timetabling and scheduling problems, and many scheduling software packages are based on GAs. GAs have also been applied to engineering. Genetic algorithms are often applied as an approach to solve global optimization problems.
As a general rule of thumb, genetic algorithms might be useful in problem domains that have a complex fitness landscape, as crossover is designed to move the population away from local optima in which a traditional hill-climbing algorithm might get stuck.
Examples of problems solved by genetic algorithms include: mirrors designed to funnel sunlight to a solar collector, antennae designed to pick up radio signals in space, and walking methods for computer figures. Many of their solutions have been highly effective, unlike anything a human engineer would have produced, and inscrutable as to how they were arrived at.

History

Computer simulations of evolution started as early as 1954 with the work of Nils Aall Barricelli, who was using the computer at the Institute for Advanced Study in Princeton, New Jersey.[12][13] His 1954 publication was not widely noticed. Starting in 1957,[14] the Australian quantitative geneticist Alex Fraser published a series of papers on simulation of artificial selection of organisms with multiple loci controlling a measurable trait. From these beginnings, computer simulation of evolution by biologists became more common in the early 1960s, and the methods were described in books by Fraser and Burnell (1970)[15] and Crosby (1973).[16] Fraser's simulations included all of the essential elements of modern genetic algorithms. In addition, Hans-Joachim Bremermann published a series of papers in the 1960s that also adopted a population of solutions to optimization problems, undergoing recombination, mutation, and selection. Bremermann's research also included the elements of modern genetic algorithms.[17] Other noteworthy early pioneers include Richard Friedberg, George Friedman, and Michael Conrad. Many early papers are reprinted by Fogel (1998).[18]
Although Barricelli, in work he reported in 1963, had simulated the evolution of the ability to play a simple game,[19] artificial evolution became a widely recognized optimization method as a result of the work of Ingo Rechenberg and Hans-Paul Schwefel in the 1960s and early 1970s – Rechenberg's group was able to solve complex engineering problems through evolution strategies.[20][21][22][23] Another approach was the evolutionary programming technique of Lawrence J. Fogel, which was proposed for generating artificial intelligence. Evolutionary programming originally used finite state machines for predicting environments, and used variation and selection to optimize the predictive logics. Genetic algorithms in particular became popular through the work of John Holland in the early 1970s, and particularly his book Adaptation in Natural and Artificial Systems (1975). His work originated with studies of cellular automata, conducted by Holland and his students at the University of Michigan. Holland introduced a formalized framework for predicting the quality of the next generation, known as Holland's Schema Theorem. Research in GAs remained largely theoretical until the mid-1980s, when The First International Conference on Genetic Algorithms was held in Pittsburgh, Pennsylvania.
As academic interest grew, the dramatic increase in desktop computational power allowed for practical application of the new technique. In the late 1980s, General Electric started selling the world's first genetic algorithm product, a mainframe-based toolkit designed for industrial processes. In 1989, Axcelis, Inc. released Evolver, the world's first commercial GA product for desktop computers. The New York Times technology writer John Markoff wrote[24] about Evolver in 1990.

Sunday, October 3, 2010

Hi, my name is Junaid and I'm a biotech student from LPU!!