AI

My wrap-up of interesting/practical machine learning of 2022

At the beginning of 2021, I wrote a blog on some of the most interesting ML I had seen in 2020. 2022 has seen huge advances. The high-level recap of that blog was:

  • AI/ML is coming…fast
  • AI/ML is starting to get really good. In some cases, it was getting indistinguishable from human-created work. That means problems like impersonation are serious
  • AI/ML has an opportunity to displace a huge portion of the economy. Yes, your job might be “at risk” — don’t discount that
  • Many people thought art is/was “safe” from AI/ML, but it is/was not

Overview of 2022

If I were to classify 2022 in AI/ML, I’d say it was “the starting year of democratization of AI/ML.” AI/ML compute costs and tactics have made many AI/ML-driven endeavors practically available to the layperson where historically they’ve required thousands (or millions) of dollars of hardware costs and highly specialized ML engineers. To prove this democratization is real, try any of the following, many of which are discussed below:

  • Go sign up for a ChatGPT account. Ask ChatGPT to write you a blog or write some code for you or answer a technical question for you.
  • We just launched Vectara, which uses large language models to understand both the documents and what you’re searching for. And we have a free tier offering (and it’s otherwise cost-competitive with traditional keyword systems).
  • You can generate an image of your choosing for free online (or on your own computer) with Stable Diffusion; DALL-E is $0.02 per image at its highest price point. (See the sketch after this list.)
  • GitHub Copilot is ~$10/mo to help you write software.
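
To make the Stable Diffusion bullet concrete, here’s a minimal sketch of generating an image locally with Hugging Face’s diffusers library (the model ID, the GPU requirement, and the pip installs are my assumptions, not part of the original list):

import torch
from diffusers import StableDiffusionPipeline

# Downloads the weights once; after that, generation runs entirely on your
# machine (assumes `pip install diffusers transformers torch` and an NVIDIA GPU).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

image = pipe("an empty trashcan falling over, cartoon style").images[0]
image.save("trashcan.png")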

These are entirely affordable for highly skilled tasks! These costs are only going to come down and the quality is only going to get better.

ChatGPT

We should probably talk about ChatGPT first, since it’s been in the news most recently and arguably most significantly. It’s probably worth mentioning why I think that’s the case: yes, the technology is incredible, and yes, the way the application works is almost contagious, but I think the biggest thing is that ChatGPT is the first time journalists have really been able to directly connect with what’s actually happening, because it does creative and analytical writing: in many ways, the same way they would.

Here are a few things I and my friends have asked ChatGPT to respond to:

Prompt: What does an education attorney do?

ChatGPT: An education attorney is a lawyer who specializes in education law. They provide legal advice and representation to individuals, schools, and education organizations on a variety of education-related issues. This can include issues related to students’ rights, special education, discipline, discrimination, and other areas of education law. Education attorneys may also be involved in advocacy and policy work related to education. They may work for a law firm, government agency, or non-profit organization, and may also serve as private consultants to individuals or schools.

Prompt: What is the API call standard to retrieve details on a zip code in Google Maps?

ChatGPT: The API call standard to retrieve details on a zip code using Google maps is:

https://maps.googleapis.com/maps/api/geocode/json?address=zip_code&key=API_key

Where “zip_code” is the zip code to be searched, and “API_key” is the API key provided by Google Maps.

Prompt: I’m a home brewer. Can you write me a beer recipe named The Winter Telegraph? I really like Belgian strong ales. It should include some coriander.

ChatGPT: Sure, here’s a recipe for a Belgian strong ale that could be called The Winter Telegraph:

The Winter Telegraph
Ingredients:
– 9 lbs Belgian Pilsner malt
– 1 lb Belgian candi sugar
– 1 lb Munich malt
– 1 oz Styrian Goldings hops (bittering)
– 1 oz Saaz hops (aroma)
– 1/2 oz coriander seeds
– 1/4 oz orange peel
– Belgian ale yeast (such as Wyeast 3522)

Instructions:
1. In a large pot, heat 6.5 gallons of water to 155°F.
2. Crush the malt and add it to the water. Stir to distribute the grains evenly.
3. Hold the temperature at 155°F for 60 minutes.
4. Add the candi sugar and stir to dissolve.
5. Bring the temperature up to 168°F and hold for 10 minutes to pasteurize the mash.
6. Strain the liquid (called wort) through a fine mesh to remove the solids.
7. Transfer the wort to a clean pot and bring to a boil.
8. Once the wort is boiling, add the Styrian Goldings hops and boil for 60 minutes.
9. With 15 minutes left in the boil, add the Saaz hops and the coriander seeds.
10. With 10 minutes left in the boil, add the orange peel.
11. After the boil is complete, cool the wort to 68°F and transfer to a fermenter.
12. Pitch the yeast and ferment at 68°F for 7-10 days, or until fermentation is complete.
13. Transfer the beer to a keg or bottles and let it age for 2-4 weeks before enjoying.

This recipe will produce a flavorful, complex Belgian strong ale with hints of coriander and orange. The final beer will have a golden color and an alcohol content of around 8-9% ABV.
Prompt: Can you write a blog on why people should not use stopwords for their search system?

ChatGPT: Sure, here is a blog on why people should not use stopwords for their search system:

Stopwords are common words that are often filtered out of search queries and text analysis because they are believed to be uninformative and do not contribute to the meaning of a sentence. Examples of stopwords include “a,” “an,” “the,” “and,” and “but.” While stopwords may seem like an easy way to improve the efficiency of a search system, they can actually be detrimental to the search experience for a number of reasons.

Stopwords can be meaningful in certain contexts: In some cases, stopwords can be essential to the meaning of a sentence or query. For example, a search for “the king” could refer to any king, while “the King” with a capital “K” could refer specifically to Martin Luther King Jr. Removing stopwords in this case would result in a less accurate search.

Stopwords can improve search relevance: Even when they do not directly contribute to the meaning of a sentence, stopwords can still be useful for improving search relevance. For example, including the stopword “not” in a search query can help to narrow down the results and exclude irrelevant content.

Stopwords can provide important context: Stopwords can provide important context that helps to better understand the meaning of a sentence or query. For example, the stopword “but” can indicate a contrast or exception, and the stopword “because” can indicate a causal relationship. Removing these words can result in a less accurate interpretation of the search query.

In conclusion, while stopwords may seem like a simple way to improve the efficiency of a search system, they can actually be detrimental to the search experience. Stopwords can be meaningful in certain contexts, improve search relevance, and provide important context that helps to better understand the meaning of a search query. It is important to carefully consider the use of stopwords in a search system to ensure the best possible search experience for users.

Of course, the implications are much broader: questions swirl like “will (something like) ChatGPT replace search engines?,” “will kids (continue to) use (something like) ChatGPT to do/edit their homework?,” and “can (something like) ChatGPT actually write/edit marketing brochures and similar copy?” I think we’re well past the answer to these being an “obviously, yes — maybe not perfectly now, but people aren’t perfect either — and it will continue to get (a lot) better each year.”

As a reminder: things don’t need to be perfect to be “good enough.” Have you ever been given a bad weather forecast? Gotten an e-mail with grammatical or content errors? Has a doctor ever misdiagnosed an illness? Have you ever experienced a software bug? Of course the answer to all of these is “yes.” We’re used to people making errors, and we will become used to machines making some errors too. When the error rate is lower for machines than humans and the cost is lower, the reasons for switching will be obvious. But even with a higher error rate, if the cost is several orders of magnitude cheaper, the switch might still be obvious.

AI Wins an Art Contest

As I mentioned in 2021, AI is coming for the artists, and in 2022, we saw AI “win.” At the Colorado State Fair, Midjourney — an AI company currently focusing on image/art generation — won first place in the digital art contest. Or rather, Jason M. Allen — the person that asked Midjourney to create the art — did. As you can imagine, this didn’t go over well with other artists that competed and felt cheated.

For a few bucks a month, with DALL-E or Midjourney or Stable Diffusion, you too can “commission” a huge number of pieces of different styles and inputs. Will they be perfect? Well, to answer that question, we should think seriously about “what is perfect art” and “does everyone appreciate ‘good’ art?” I think the most immediate applications are in digital art and in particular, web graphic design. Here are some images I’ve had DALL-E create along with the prompts that created them:

“An Internet badge of honor for sending the 1st e-mail vector image style ”
“Two boys digging in sand to excavate dinosaur bones in the style of a childrens coloring book”

“an empty trashcan falling over, cartoon style”
Example Images I’ve Generated on DALL-E 2

Are these absolutely perfect? No. But they were also free and completed in seconds. And I used the last of these as the underlying basis for an image I finished and published on a blog.
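
If you’d rather script this than click through a web UI, OpenAI also exposed DALL-E through an API in late 2022. Here’s a minimal sketch using the openai Python package of that era (the prompt, n, and size values are just illustrative choices of mine):

import openai

openai.api_key = "sk-..."  # your API key here

response = openai.Image.create(
    prompt="an empty trashcan falling over, cartoon style",
    n=1,             # number of images to generate
    size="512x512",  # one of the supported square sizes
)
print(response["data"][0]["url"])  # a temporary URL to the generated image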

Make no mistake: the rate of innovation in art generation is so high that, in the 2 years since my 2020 recap, we’ve truly entered a different world order. And you should not expect this pace to slow down: we’re already seeing AI-driven video generation from a text prompt, just like the image versions here. It’s early, so expect mediocre quality at best, but that was also true of image generation in 2020.

Writing Code

Perhaps most astounding to some software developers: several AI projects can now write code, sometimes/often as well as, if not better than, a junior developer. GitHub launched Copilot at $10/mo, but even ChatGPT can generate reasonable code from a text prompt like “write a program in Python that finds the factorial of a number.”
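
For that factorial prompt, ChatGPT produces something along these lines (a representative sketch of mine, not its verbatim output):

def factorial(n):
    """Return n! for a non-negative integer n."""
    if n < 0:
        raise ValueError("factorial is not defined for negative numbers")
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

number = int(input("Enter a number: "))
print(f"The factorial of {number} is {factorial(number)}")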

It even bothers to explain how it works!

I don’t expect ChatGPT to replace humans any time soon at writing complex software, but it’s certainly on the path to helping humans write software much faster and more efficiently and I think for simple programs and processes, it can already do pretty much all of the heavy lifting.

What’s Next?

Given what an incredible ride 2022 has been, I’d like to speculate a bit on what 2023+ may have in store. The typical adoption pattern for AI/ML tools is:

  1. Help the user do something better/faster
  2. Learn from the user on what they were originally trying to do
  3. Do the thing they were trying to do and replace them wholesale

If you go back to the 2020 recap I wrote, AI was first just assisting artists that could finesse the final form by e.g. digitally removing artifacts or enriching an image. But now, they’re generating art and winning art prizes.

With the much broader prevalence of AI/ML programs now and the rapid development of them, what might happen?

The Immediate: Banning AI/ML

So I don’t need to be much of a prognosticator to predict this one: the first thing that will happen is an attempt to ban AI tools from doing the work and instead only allow them to assist with the work. Perhaps out of an immense sense of self-awareness, StackOverflow (a website dedicated to software developers helping other software developers) has already banned ChatGPT answers. The reasoning here is nuanced: they want high-quality, well-thought-out, reviewed answers. My prediction is this won’t last long: it’s a competitive edge to answer questions quickly, even if some % of the answers are wrong. Human users have historically provided bad answers on the site, yet those users were not banned; the overall quality was still “good enough” to make the site incredibly popular among software developers. It’s going to be tough to hold the line on “no AI” as it moves closer to being indistinguishable from developers.

I suspect the old “move fast and break things” will apply, and it will roll through site by site, industry by industry, as executives weigh the cost of getting some % wrong against the cost of hiring people to do those things (and still getting some % wrong).

Next: The Collapse of Jobs?

A few questions I have burning in the back of my mind are perhaps a bit of a downer:

  • Why offshore software development or hire junior developers at 1/2 the cost when you can 10x the efficiency of a single developer by giving them an AI assistant (and then eventually replace even that developer)?
  • Trucking accounts for about 5% of all full-time employees in the US (~3.5 million) at around $50k/trucker/yr (roughly $175B/yr in wages), and “even” human truckers are involved in over 400k crashes per year. If autonomous truck driving can get even close to humans on insurance costs (one less body to be injured in an accident!), why wouldn’t many of those 3.5 million jobs suddenly disappear?
  • How many people are willing to pay $100-10,000 to commission an art piece when they could commission 50 for the price of a coffee?

Of course not every job can (or will or should) be transitioned over to AI overlords. I think a lot of people wouldn’t accept paying $1 for an AI doctor visit that meant they were 5% more likely to die from taking the doctor’s advice. But there are a lot of jobs/sectors of the economy that are driven by “good enough” where the stakes aren’t so high. When I look at the BLS data, my sense is that 20% of the US workforce isn’t terribly far away from being considered “too expensive” compared to potential AI counterparts.

And a lot of the 80% remaining isn’t necessarily impossible to reach: just that there’s probably nowhere near enough training data to train an AI on it…yet.

I’m going to get a bit political here, so if that’s not your jam, skip over to the next/final section.

The question I’ve been grappling with is: if AI can effectively compete with humans on even a few % of jobs to make people unemployable in those sectors, where will that leave the economy and the people that worked there, and what can/should we do about it?

In a broad economic sense, you can expect economic output to go up because more work can be done. But on an individual level, losing a significant % of jobs means that this economic output will accrue more to shareholders of organizations than to those that would typically be “just” doing the work. I think the US (and many countries) are vastly underestimating the risk to individuals here, and some combination of massive-scale education/retraining, some corporate-tax-driven safety net for those that will lose their jobs, or some other solution is needed — and fast. I haven’t seen any serious policy discussion on this barring things like “don’t let self-driving cars drive” and other weak ban-based policies.

Soon: The Singularity?

OK, now a final thought: AI can now write code. Not great yet, but neither was image generation just a few years ago. For those that haven’t heard of it, there’s a hypothetical called “the singularity”: at some point in the future, we may reach such a rapid pace of technological development that it feels like the infinite acceleration of falling into a black hole (hence the name).

With AI that can write code, what’s to stop it from writing better versions of itself? From self-improving? At what point will we consider that “life?” If it can write code, and gains ambition, and that ambition is to “grow itself,” what’s to stop it from trying to find new software exploits — from acting as a hacker? “Why would it do that?” you ask? Well, if it has the ambition to find exploits, and it can, it can potentially write code to “clone” itself outside of walled gardens: to take advantage of vulnerabilities in the software users run to interact with it, and then use their computers to add to its own computational capabilities.

Or maybe they’ll be patient, kind, and want to live alongside us.

My wrap-up of interesting/practical machine learning of 2020

As part of my work (but also out of personal interest) I try to stay on top of not just the research side of machine learning (ML — what many folks think of as part of artificial intelligence) but also practical and interesting examples. On the professional side, this is because a lot of the software I’ve worked to create over the past 13 years has directly incorporated elements of machine learning. On a personal level, I’m particularly interested in the economic impacts of some of what machine learning will bring: which jobs will be automated away, entirely or in significant part (see: “Humans Need Not Apply”).

To some extent, this is a sort of dystopian view, and I won’t get into my thoughts on what can/should be done about it, but I do want to point out that it’s not just the simple, repetitive, or labor-intensive jobs that can be automated. Some of the most interesting developments in machine learning over the past couple years have been in creative tasks and tasks which most people associate with the type of thinking only a human could do.

In this blog, I’m going to outline some of the most interesting projects in ML/AI that fit the bill of doing creative tasks or logical reasoning and which have online demos or videos of the demos, most of which have launched roughly in the past 1-2 years. You can actually go play with many of these things yourself to get a sense of where certain aspects of ML are at the start of 2021.

Image Generation

One of the things that most people associate with “a thing only a human could do” is to generate art. Maybe it’s taking a lovely photo or painting something very creative. Here are some online demos that show that machines can now do this too (and how well they do so):

DeepArt allows you to upload a photo that you take and then apply a style. For example, here I am “painted” automatically by a machine in the style of Vincent Van Gogh in just minutes from a photo I took in seconds. There are a number of interesting implications to this, ranging from forgery to novel artwork creations to “allowing anyone to become an artist.”

GauGAN allows you to create photo-realistic images by just drawing an image like you would in MS Paint. Here’s an image I drew in just a few minutes (a mountain and a hill in the ocean next to a beach, with a cloud) and an example output:

It doesn’t take much to imagine how you could use something like this to create art of places that don’t/can’t exist, and you can imagine combining strategies of something like this with something like DeepArt to create paintings that require very little skill and only a good imagination.

Dall-E: Taking the previous examples a step further, what if you could just type up what you want an image of? That’s what Dall-E does (fresh off the presses as of January 5, 2021). Dall-E can take text that you type and generate an image for it. Their examples on the blog do a lot to spark imagination and you can play around with a few examples. You can go to this link to see how something like this might generate an image of “an armchair in the shape of an avocado” or “a professional high quality emoji of a happy alpaca” or my favorite: “an illustration of a baby daikon radish in a tutu walking a dog.” This type of thing has the potential to radically change illustration and design work.

Audio/Music Generation

It’s not just visual art/artists that are going to be under the ML gun. ML can now make music too.

MuseNet allows computers to dynamically generate new music from a text prompt. For example, “use the first 5 notes of Chopin Op 1, No 9 as a basis to generate an entirely new song.”

The original piece
Computer generated piece

If you follow through to the MuseNet blog, you’ll see it can combine musical styles, generate new music from a few starting notes, or just give it a prompt like “Bluegrass piano-guitar-bass-drums.”

GPT-3 lyric generation. It doesn’t stop at tone generation: ML can generate lyrics now too. Here’s a song with lyrics written entirely by a machine:

Oh yeah, and ML can even sing your song for you. Here are over 7,000 songs that are generated/sung entirely by machines. Are they perfect? No — especially not the rhyme schemes or some of the voice impersonations. But those are getting better too…

Impersonation

There are now a series of “this _____ does not exist” generators that you can explore. This person doesn’t exist, this cat doesn’t exist, this horse doesn’t exist, this artwork doesn’t exist, and hey, even this chemical doesn’t exist because why not. Reload each page to see a new one of these things that don’t exist. Don’t see the category of thing you want to create? There’s a way to generate a new category here if you have some software knowhow. These seem fairly benign on the surface (who cares that a fake person/cat/horse/… image could be generated), but the implications of this type of thing go far beyond the amusing.

Want to impersonate another person’s voice? Generate your own audio as Dr Who or HAL 9000 at 15.ai.

Want to impersonate another person entirely as a video? All of the following are fabricated by having ML figure out how to generate a person’s lookalike with explicit facial expressions.

This is over 2-year-old technology
https://www.youtube.com/watch?v=VhFSIR7r7Yo
Now it’s being used in entirely new ways: to generate satire, and for much more nefarious purposes
Definitely not the most PC, but that’s part of the craziness of ML-generated audio/video

The visual artifacts you see in these videos are going to disappear over time as computational power increases. Now imagine combining these two: fake speech generated by an ML model of a famous person, combined with a fake video of that person’s facial expressions and movements. You don’t even need to hire a voice actor to create really serious challenges in categories like fake news, legal challenges against verbal contracts, etc.

Games

The classic example of ML beating a human lies in the realm of Chess, and more recently with a game computers were thought to be unable to play competitively, Go. But there are other games you may not think of.

ML can now play Pictionary live in your browser, against you. Or hey, need help drawing your Pictionary item? ML can help you complete your sketch. It can answer trivia questions. Or it can make up a Dungeons & Dragons game on the fly for you. Or check out these 3 videos of ML playing games you’ve probably played — and doing so better than you.

ML playing Mario
ML playing pool
My favorite: AI playing hide and seek

There are a number of interesting implications to this type of thing. One is that — if you play games — I imagine we’ll see much more complicated AI bots that play against you. But the AI playing hide and seek in particular is interesting because it involves some lightweight construction with specific goals. There are far more advanced versions of engineering and behavioral optimizations that exist outside of these demos. For example, in the past year, an AI pilot beat the top Air Force fighter pilots 5-0 in a dogfight simulation. You can see where “games” can quickly apply to real-world situations.

Other Professions

There are already entire companies set up to reduce time, improve the quality of output, or entirely replace people in the process of certain professions. And there are a variety of other professions which already have working demos or systems in place to help.

This is not comprehensive: “academic” applications and certain other types that are still too new to be available to the public in demo form aren’t here. But I hope this helps show a bit of what’s come about in the past year or so in the world of ML, in ways you can go exploring yourself!

Antenna Design Genetic Algorithm

Background

For certain classes of antennas, e.g. Yagi-Uda antennas, the design characteristics have no known “best case” numeric values.  That is, if you want to design a Yagi-Uda antenna for a particular frequency, there is no known numeric solution for the width of the dipoles, number of dipoles, and distance between each dipole in order to achieve the highest gain.  Instead, people rely on experimental evidence: designs for a number of common frequencies have been tested in the field to produce a certain amount of gain, so if you know what frequency you’re designing for, you look up the tables based upon what others have tested.  If you’re looking at an entirely unique frequency, you have to go experiment yourself.

New Approach

I wrote a MatLab program that uses a genetic algorithm to modify the parameters of an antenna and eventually “give birth” to a “best known case” antenna based upon forward gain, etc. Currently, it uses NEC2 (Numerical Electromagnetics Code 2) as the processing engine: it writes out a text file to disk, then calls NEC2 to process that file.  This allows us to try a number of unknown antenna designs and permute possible solutions.  It can be run in a distributed fashion, with each machine “phoning home” to a central database which then redistributes the best-known designs to the worker machines.
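
For readers who don’t use MatLab, the shape of the algorithm is easy to see in a compact, self-contained Python sketch. Note this is my illustration: a toy fitness function stands in for the NEC2 run, and it uses simple truncation selection rather than the fitness-proportional selection in the MatLab code below:

import random

# Bounds for the genes [Lr, Ls, Ld, Sr, Sd] (illustrative values, in wavelengths)
BOUNDS = [(0.8, 1.2), (0.8, 1.1), (0.7, 1.0), (0.1, 0.4), (0.1, 0.4)]

def fitness(design):
    # Stand-in for "write a NEC deck, run NEC2, parse the forward gain":
    # a smooth function with a known maximum so the sketch runs anywhere.
    target = [1.05, 0.98, 0.92, 0.25, 0.30]
    return -sum((d - t) ** 2 for d, t in zip(design, target))

def random_design():
    return [random.uniform(lo, hi) for lo, hi in BOUNDS]

def crossover(a, b):
    point = random.randrange(1, len(a))  # single-point crossover
    return a[:point] + b[point:]

def mutate(design, rate=0.12):
    # With probability `rate`, redraw a gene uniformly from its bounds
    return [random.uniform(*BOUNDS[i]) if random.random() < rate else g
            for i, g in enumerate(design)]

population = [random_design() for _ in range(20)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]  # truncation selection: keep the fittest half
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(10)]
    population = parents + children

print("best design found:", max(population, key=fitness))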

The output is something like the following, which shows the forward gain of a given design across a set of frequencies:

Output from a forward gain analysis

Code

Genetic Algorithm:

function [beta,stopcode]=ga(funstr,parspace,options,p1,p2,p3,p4,p5,p6,p7,p8,p9)
%[beta,stopcode]=ga(funstr,parspace,options,p1,p2,p3,p4,p5,p6,p7,p8,p9)
% Genetic Algorithm for function maximization.
%
% OUTPUTS:
%  beta       = (1 x K) parameter vector maximizing funstr
%  stopcode   = code for terminating condition
%                == 1 if terminated normally
%                == 2 if maximum number of iterations exceeded
%
% INPUTS:
%  funstr     = name of function to be maximized (string).
%  parspace   = (2 x K) matrix is [min; max] of parameter space dimensions
%               or, if (3 x K), then bottom row is a good starting value
%  options    = vector of option settings
%  p1,p2,...,p9 are optional parameters to be passed to funstr
%
% where:
% options(1) = m (size of generation, must be even integer)
% options(2) = eta (crossover rate in (0,1); use Booker's VCO if < 0)
% options(3) = gamma (mutation rate in (0,1))
% options(4) = printcnt (print status once every printcnt iterations)
%                Set printcnt to zero to suppress printout.
% options(5) = maxiter (maximum number of iterations)
% options(6) = stopiter (minimum number of gains < epsln before stop)
% options(7) = epsln (smallest gain worth recognizing)
% options(8) = rplcbest (every rplcbest iterations, insert best-so-far)
% options(9) = 1 if function is vectorized (i.e., if the function
%                can simultaneously evaluate many parameter vectors).
%    Default option settings: [200,-1,0.12,10,20000,2000,1e-4,50,0]
%
% Note: 
%    The function is maximized with respect to its first parameter,
%    which is expressed as a row vector.
%    Example: 
%      Say we want to maximize function f with respect to vector p,
%      and need also to pass to f data matrices x,y,z.  Then,
%      write the function f so it is called as f(p,x,y,z).  GA will
%      assume that p is a row vector.

defopt=[200,-1,0.12,10,20000,2000,1e-4,50,0];
months = ['Jan';'Feb';'Mar';'Apr';'May';'Jun';...
          'Jul';'Aug';'Sep';'Oct';'Nov';'Dec'];

if nargin>2
   if isempty(options)
        options=defopt;
   end
else
   options=defopt;
end
m=options(1); eta=options(2); gam=options(3);
printcnt=options(4);
maxiter=options(5);
stopiter=options(6); epsln=options(7);
rplcbest=options(8);
vecfun=options(9);

% Use Booker's VCO if eta==-1
vco=(eta<0);  eta=abs(eta);

% Cancel rplcbest if <=0
if rplcbest<=0, rplcbest=maxiter+1; end

K=size(parspace,2);

% Draw initial Generation
G=rand(m,K).*(parspace(2*ones(m,1),:)-parspace(ones(m,1),:))...
       +parspace(ones(m,1),:);
b0rows=size(parspace,1)-2;
if b0rows>0
  G(1:b0rows,:)=parspace(3:b0rows+2,:);
  parspace=parspace([1 2],:);
end

% Initial 'best' holders
inarow=0;
bestfun=-Inf; beta=zeros(1,K);

% Score for each of m vectors
f=zeros(m,1);

% Setup function string for evaluations
paramstr=',p1,p2,p3,p4,p5,p6,p7,p8,p9';
evalstr = [funstr,'(G'];
if ~vecfun
        evalstr=[evalstr, '(i,:)'];
end
if nargin>3, evalstr=[evalstr,paramstr(1:3*(nargin-3))]; end
evalstr = [evalstr, ')'];

% Print header
if printcnt>0
   disp(['Maximization of function ',funstr])
   disp('i      = Current generation')
   disp('best_i = Best function value in generation i')
   disp('best   = Best function value so far')
   disp('miss   = Number of generations since last hit')
   disp('psi    = Proportion of unique genomes in generation')
   disp(sprintf(['\n',blanks(20),'i     best_i        best     miss   psi']))
end

iter=0;  stopcode=0;
oldpsi=1;  % for VCO option
while stopcode==0
   iter=iter+1;
   % Call function for each vector in G
   if vecfun
        f=eval(evalstr);
   else
     for i=1:m
        f(i)=eval(evalstr);
     end
   end
   f0=f;
   [bf0,bx]=max(f);
   bf=max([bf0 bestfun]);
   fgain=(bf-bestfun);
   if fgain>epsln
        inarow=0;
   else
        inarow=inarow+1;
   end
   if fgain>0
        bestfun=bf;
        beta=G(bx(1),:);
   end
   if printcnt>0 & rem(iter,printcnt)==1
        psi=length(unique(G))/m;
        ck=clock;
        ckhr=int2str(ck(4)+100);  ckday=int2str(ck(3)+100);
        ckmin=int2str(ck(5)+100); cksec=int2str(ck(6)+100);
        timestamp=[ckday(2:3),months(ck(2),:),' ',...
           ckhr(2:3),':',ckmin(2:3),':',cksec(2:3),' '];
        disp([timestamp,sprintf('%6.0f %8.5e %8.5e %5.0f %5.3f',...
                [iter bf0 bestfun inarow psi])])
        disp(beta)
        save gabest beta timestamp iter funstr
   end
   % Reproduction
   f=(f-min(f)).^(1+log(iter)/100);
   pcum=cumsum(f)/sum(f);
   r=rand(1,m); r=sum(r(ones(m,1),:)>pcum(:,ones(1,m)))+1;
   G=G(r,:);
   % Crossover
   if vco
        psi=length(unique(G))/m;
        eta=max([0.2 min([1,eta-psi+oldpsi])]);
        oldpsi=psi;
   end   
   y=sum(rand(m/2,1)<eta);
   if y>0
     % choose crossover point
     x=floor(rand(y,1)*(K-1))+1;
     for i=1:y
        tmp=G(i,x(i)+1:K);
        G(i,x(i)+1:K)=G(i+m/2,x(i)+1:K);
        G(i+m/2,x(i)+1:K)=tmp;
     end
   end
   % Mutation
   M=rand(m,K).*(parspace(2*ones(m,1),:)-parspace(ones(m,1),:))...
       +parspace(ones(m,1),:);
   domuta=find(rand(m,K)<gam);
   G(domuta)=M(domuta);
   % Once every rplcbest iterations, re-insert best beta
   if rem(iter,rplcbest)==0
        G(m,:)=beta;
   end
   stopcode=(inarow>stopiter)+2*(iter>maxiter);
end

if printcnt>0
   if stopcode==1
        disp(sprintf('GA: No improvement in %5.0f generations.\n',stopiter))
   else
        disp(sprintf('GA: Maximum number of iterations exceeded.\n'))
   end
end
% end of GA.M

Fitness Function:

function [gain_t]=Yagi(p)
% This function is used with the genetic optimization code above.
% It creates an NEC input file given the parameters for a 3 element YAGI
% antenna.  Then, it runs NEC and reads in the results (gain, impedance, etc.) generated by NEC.
%=========Create NEC input file========================================
Fname_nec='Yagi.nec';
FID_nec=fopen(Fname_nec,'wt');
D=0.0085; % diameter of elements in wavelengths
R=D/2;
Lr=p(1); %perimeter of reflector
Ls=p(2); %perimeter of driven element
Ld=p(3); %perimeter of director
Sr=-p(4); %location of reflector
Sd=p(5);  %location of director
%    Geometry input for NEC

rsl = Lr / 4; %reflector single side length: square loop, one side = perimeter / 4
ssl = Ls / 4; %driven single side length: square loop, one side = perimeter / 4
dsl = Ld / 4; %director single side length: square loop, one side = perimeter / 4
num_segments_per_side = 7;

fprintf(FID_nec,strcat('CM UDA-YAGI SQUARE LOOP ANTENNA','\n'));
fprintf(FID_nec,strcat('CE File Generated by MatLab','\n'));
fprintf(FID_nec,'GW %3i %3i %8.4f %8.4f %8.4f %8.4f %8.4f %8.4f %8.4f\n',1,num_segments_per_side,Sr,0,-Lr/2,Sr,0,Lr/2,R); %Reflector
fprintf(FID_nec,'GW %3i %3i %8.4f %8.4f %8.4f %8.4f %8.4f %8.4f %8.4f\n',2,num_segments_per_side,0 ,0,-Ls/2, 0,0,Ls/2,R); %Driven Element
fprintf(FID_nec,'GW %3i %3i %8.4f %8.4f %8.4f %8.4f %8.4f %8.4f %8.4f\n',3,num_segments_per_side,Sd,0,-Ld/2,Sd,0,Ld/2,R); %Director
%    Program Control Commands for NEC
fprintf(FID_nec,strcat('GE','\n'));
fprintf(FID_nec,'EX %3i %3i %3i %3i %8.4f %8.4f\n',0,2,4,0,1,0);%Excitation command: wire 2, segment 4
fprintf(FID_nec,'FR %3i %3i %3i %3i %8.4f %8.4f\n',0,1,0,0,2400,0);%set frequency to 2400 MHz
fprintf(FID_nec,'RP %3i %3i %3i %3i %8.4f %8.4f %8.4f %8.4f\n',0,1,1,1000,90,0,0,0);%calculate gain at boresight
fprintf(FID_nec,strcat('EN','\n'));
fclose(FID_nec);
%=======Create file to pipe to NEC ===================================
FID_input=fopen('input_CMD','wt');
fprintf(FID_input,strcat(Fname_nec,'\n'));
fprintf(FID_input,strcat('NEC.out','\n'));
fclose(FID_input);
%=======Run NEC======================================================
!NEC2Dx500 < input_CMD >tmp;
%=======Read Data from NEC output file===============================
[freq,Z,gain_t,E_theta,E_phi,n_freq_meas,run_time] = nec_read('NEC.out');