Nowadays our lives are shaped by social media algorithms. Here we've covered all related content under 'Rhythm Of Algorithm'.


Featured stories:

  • What is Algorithm Design? – Computer Science Degree Hub
  • Google Algorithm Update 2020: Unconfirmed Update
  • Algorithms (comic)
  • The Algorithm: Idiom of Modern Science, by Bernard Chazelle
  • What is an Algorithm in Computer Science?

*Please excuse the Google/sponsor ads. An ad may sometimes show you something awesome!


What is Algorithm Design? – Computer Science Degree Hub

An algorithm is a series of instructions, often referred to as a “process,” which is to be followed when solving a particular problem. While technically not restricted by definition, the word is almost invariably associated with computers, since computer-processed algorithms can tackle much larger problems than a human, much more quickly. Since modern computing uses algorithms much more frequently than at any other point in human history, a field has grown up around their design, analysis, and refinement. The field of algorithm design requires a strong mathematical background, with computer science degrees being particularly sought-after qualifications. It offers a growing number of highly compensated career options, as the need for more (as well as more sophisticated) algorithms continues to increase.
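
To make the definition concrete, here is an illustrative example (ours, not from the original article): Euclid's algorithm for the greatest common divisor, one of the oldest recorded algorithms, written in Python.

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: repeatedly replace the pair (a, b)
    with (b, a mod b) until the remainder is zero. The last
    nonzero value is the greatest common divisor."""
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(48, 36))  # 12
```

The same fixed series of instructions resolves the problem for any pair of integers, which is exactly what makes it an algorithm rather than a one-off calculation.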

Conceptual Design

At their simplest level, algorithms are fundamentally just a set of instructions required to complete a task. The development of algorithms, though they generally weren’t called that, has been a popular habit and a professional pursuit for all of recorded history. Long before the dawn of the modern computer age, people established set routines for how they would go about their daily tasks, often writing down lists of steps to take to accomplish important goals, reducing the risk of forgetting something important. This, essentially, is what an algorithm is. Designers take a similar approach to the development of algorithms for computational purposes: first, they look at a problem. Then, they outline the steps that would be required to resolve it. Finally, they develop a series of mathematical operations to accomplish those steps.
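
The three-stage process described above, state the problem, outline the steps, then express the steps as operations, can be sketched with a deliberately simple, hypothetical task: finding the largest value in a list.

```python
# Problem: find the largest number in a non-empty list.
# Steps: assume the first value is largest; compare it against
# each remaining value; keep whichever is larger; report the result.
def find_largest(values: list) -> float:
    largest = values[0]       # step 1: assume the first is largest
    for v in values[1:]:      # step 2: examine each remaining value
        if v > largest:       # step 3: keep the bigger of the two
            largest = v
    return largest            # step 4: report the answer

print(find_largest([3, 17, 5, 9]))  # 17
```

Each comment maps a line of code back to a step in the written-out plan, mirroring how a designer's outline becomes a computable procedure.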

Related reading: 30 Most Affordable Online Bachelor’s Degrees in Computer Science 2017

From Small Tasks to Big Data

A simple task can be solved by an algorithm written in a few minutes, or at most a morning's work. The level of complexity runs a long gamut, however, arriving at problems so complicated that they have stymied countless mathematicians for years, or even centuries. Modern computing confronts problems at this level in areas such as cyber-security and big data handling: the efficient and thorough sorting of sets of data so large that a standard computer would be unable to process them in a timely fashion. Examples of big data might include "every article on Wikipedia," "every indexed and archived webpage going back to 1998," or "the last six months of online purchases made in America."
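
One standard technique for sorting data too large to handle in one pass is external (chunked) sorting. The sketch below is a simplified illustration of the idea, assuming the records arrive as an iterable; a real implementation would spill each sorted chunk to disk instead of keeping it in memory.

```python
import heapq
from itertools import islice

def external_sort(records, chunk_size=100_000):
    """Sort an arbitrarily large iterable by sorting fixed-size
    chunks in memory, then lazily merging the sorted chunks with
    a k-way merge. Real systems write chunks to temporary files;
    this sketch keeps them in memory to show the structure."""
    records = iter(records)
    chunks = []
    while True:
        chunk = sorted(islice(records, chunk_size))
        if not chunk:
            break
        chunks.append(chunk)
    return heapq.merge(*chunks)  # lazy merge of sorted runs

print(list(external_sort([5, 2, 9, 1, 7], chunk_size=2)))  # [1, 2, 5, 7, 9]
```

Because `heapq.merge` only ever holds one element per chunk at a time, the merge phase stays cheap even when the number of runs is large.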

Algorithm Engineering

When the design of new algorithms is applied in practical terms, the related discipline is known as algorithm engineering. The two functions are frequently carried out by the same people, although larger organizations (such as Amazon and Google) employ specialized designers and engineers, given their level of need for new and specialized algorithms. Like the design process, algorithm engineering frequently involves computer science accreditation, with a strong background in mathematics: where they exist as a separate, specialized profession, algorithm engineers take the conceptual ideas of designers and create processes from them that a computer will understand. With the steady advancement of digital technology, dedicated engineers will continue to become more and more common.

For More Information

This is a relatively new field, and the heightened relevance of designing and refining algorithms is very much a recent phenomenon. More information on algorithm design is readily available online, including a variety of free introductory courses and online tutorials.

MCB Love to Mention : )

Content courtesy → What is Algorithm Design? – Computer Science Degree Hub

Have views? Pay a visit: MCB – Rhythm Of Algorithm

Google Algorithm Update 2020: Unconfirmed Update

"What is the latest Google Algorithm Update?" is the question SEOs search for the most these days. The major reason "Google Algorithm Update" has become such a trending keyword is the uncertainty that follows the rollout of each update. Google rolls out hundreds of core algorithm updates each year, and the search engine giant announces only the few that have a far-reaching impact on the SERP.

Each time Google updates its algorithm, it moves a step forward in making the search experience easier and more relevant for users. As SEO professionals, however, we are recommended to use white-hat techniques.


Unconfirmed Google Algorithm Update – 16-08-2020

A few days after Google faced the heat due to a bug within its Caffeine Algorithm that caused irrelevant results to appear on SERPs, the search engine giant now seems to have launched a major algorithm update.

According to the data retrieved from algorithm trackers, the new update, which remains unconfirmed till now, has created strong turbulence in search results. This also means that many websites may have noticed significant changes in the organic traffic trend.

This kind of very high SERP fluctuation is usually the result of Google making decisive changes to its algorithm, as it does a number of times every year. Usually, these unannounced massive fluctuations are precursors to a looming broad core update.

SERP Fluctuation Detected by Different Algorithm Trackers


[Screenshots: SERP Metrics, Rank Ranger]

Unconfirmed Google Algorithm Update – 23-06-2020

We know Google has a history of rolling back some of the signals after the rollout of Broad Core Updates. However, these changes are not announced and it’s only through the sensors that we get to know about the SERP fluctuations.

It looks like Google has rolled back a few signals of May Core Update on the 23rd June as we have seen websites hit by the last Broad Core Update getting ranking improvements.

Here is a quick analysis of unconfirmed Google Update that I did along with Senthil Kumar, VP, Stan Ventures.

The sensors are also showing a big spike in ranking fluctuations.

[Screenshots: Advanced Web Rankings, SEMRush Sensor]


Google Starts Rolling Out May 2020 Core Update – 04-05-2020

After two months of recess, the SEO community had to brace for yet another Google Broad Core Algorithm Update. Announcing the rollout of the latest core update, the official Search Liaison Twitter handle said the update had started rolling out across different data centers.

The roll-out, like all other typical broad core algorithm updates, will take a week to complete, and the impact may take some time to reflect on the results.


Unlike the daily updates that Google launches, the broad core algorithm update has vast implications as it is notorious for shuffling the organic results, causing fluctuations in the organic rankings of websites. 

However, there is no quick fix if a broad core algorithm update hits you, says Google. The decline in organic ranking is not because your website has serious SEO issues; it is the result of Google finding better results for the search query.

Listen to my discussion with Senthil Kumar, VP, Stan Ventures about the impact of Google May Core Algorithm Update:

Full Transcript 

Senthil: Hey guys, welcome to another episode of SEO on Air. Today I have with me my colleague, Dileep. And this is the first time we’re collaborating on this (podcast). Dileep is our content head and takes care of the Stan Ventures web content. 

Most of you who are listening to this will be aware that we rank for most of the competing keywords when it comes to Google algorithm updates. It's that time of the year when Google releases these algorithm updates, and you may see a bump in your web traffic. Well, that doesn't necessarily mean it's all because of the algorithm update; it could just as well be seasonal traffic. So we are here to discuss what this algorithm update is about and what it can cause.

So, I and Dileep came up with the podcast idea just five minutes back (we were having a conversation about what’s happening out there). It’s a very ad hoc, impromptu podcast, I would say. Anyway, Dileep, welcome to the show, man. 

Dileep: Thank you. Thank you. 

Senthil: The topic for today’s discussion is the Google algorithm update. The core algorithm update is very special because it’s a rock-solid one. They (Google) waited for two months and then released it. 

Dileep: Yes. So now they have a definite pattern for launching the algorithm updates, at least the core one. The last one was launched in March, the other in Jan. So definitely we can expect a broad core algorithm update during the period of every two-months. 

Within the SEO community, many SEO forums were discussing Google postponing the update because of COVID. However, I think Google had its own plans.

So this time, if you check the fluctuations after the May broad core update, they are huge. There was a big launch last year, the BERT. It was said that almost 10% of the global search will have its impact, but you can’t see much fluctuation.

However, compared to the last three or four Google algorithm updates, this one has high fluctuation. This means some websites, which were ranking on top, have definitely felt the heat. Some well-maintained websites may have improved their rankings, but a handful of websites that were ranking on the first page or the featured snippet have definitely felt an impact of this update.

Senthil: I did notice that because I was tracking a few high-value keywords last week in an insurance article for a prospective client. We were doing an SEO audit and trying to figure out what they were ranking for. When talks began, they were in the sixth position, and after we submitted the proposal, they were in the third position. They were pretty happy about it.

But one thing that I noticed was that earlier, the first and second position was dominated by really good brands in the industry. The prospective client was in the third position while other data aggregators like Policy Bazaar were ranking in the fifth and sixth places. 

After this update, these big brand websites have gone down the ranking scale and the data aggregating sites with some really good content have climbed up the ladder. It’s a weakness because, instead of tracking the services pages of websites for keywords related to money, Google is choosing to focus on sites with huge chunks of detailed content. 

Dileep: I think that’s pretty good because Google is tracking for the users. So, for example, the guy who lost his ranking for a money-related keyword may rank on his features page for the same keyword. However, if a user clicks on the site and finds the information useless, he/she will definitely click back to the search page and find the right site for their needs. So this is basically Google collecting information from users over time, using tactics like this where the user closes one site and opens another, and letting them know what they want ranking on SERPs.

That's why there is always a gap of two to three months when they launch the broad core update. If you check on a daily basis, Google launches two or three updates every now and then; last year alone, close to a thousand updates were launched.

Coming back to the money keyword, we have to understand that the user typing the keyword is not considering it as one. He wants something else, and that’s why the blogs are ranking now. 

Senthil: So, let me share my screen and do a quick check. I think you’ve already added a detailed analysis on this, so let’s do a quick walkthrough. For people who are listening to this, we’ll be adding this podcast on the blog post as well for your perusal. 

Dileep: The day the update was launched, there were no major fluctuations in rankings. What usually happens is that after Google rolls out the update, its effects start showing over the next few days. 

It was on May 4th that Danny Sullivan, through his official Google account, announced that they were going to launch the new update. But when I checked the Moz and SemRush sensors on that particular day, both these tools were not showing much fluctuation.

It was on May 4th that they announced the update, but the fluctuation started from May 5th.  

Senthil: Yeah. I can see a big tower here. 

Dileep: Actually, there is a bigger update, which you can see if you Google SEMRush sensor, and it has caused huge fluctuation in rankings. 

Usually, the algorithm updates are launched across multiple data centers. On May 5th, when it was just rolled out, the SEMRush sensor was at 7.7, and you can see a huge spike on May 6th.

Senthil: Yeah, just shot up like anything. 

Dileep: So this huge spike in rank fluctuations is something that I haven’t seen for at least one and a half years. It can be a good indicator that many websites have been hit by the update. You can also see all the categories that have got hit. 

Senthil: So it’s clearly not a regular update. When it comes to Google core algorithm updates, it’s across industries. It’s not like an EAT update or YMYL update where only the money websites, insurance, healthcare, and other similar websites were affected.

But if you look at a broad core algorithm update, you can easily figure it out by looking at this alone: look across all the categories. Almost everybody has taken a hit, right? This gives you a sense that this is definitely a broad core algorithm update.

So the local pack, obviously, because of the COVID and other reasons, might have taken a hit, but look at this knowledge panel, the growth by 32.2%. 

Dileep: And also, if you check for some individual niches, you can see there’s a huge drop in featured snippet positions during that particular day.

But it has picked up after they rolled out the update completely. So this means that they have made some fresh changes to what they are displaying on the featured snippet data. So if you’ll just select one of the top niches in that, go to finance or even health.

Senthil: Yeah. Let’s look at health care because that’s the department where last year, the update hit them. The local pack has taken a hit by 1.2%

Dileep: So you can see from this climb how the featured snippets behaved: on the day they launched the update, the featured snippet count went down like anything. That means they really did a refresh on the featured snippet area, and now we can see it normalizing.

One thing about featured snippets is you can see that they are getting the user’s inputs. They have a feedback area for all the features in the search results displayed. They get feedback from the users and they use it whenever they do a broad core algorithm update. That’s what I have seen while analyzing the past few updates that they have rolled out.

A feature snippet position is something that no website owner can say, “I am ranking on the featured snippet and I will be ranking there for the rest of the year.” You can’t predict it because this is something that Google decides and based on many factors, which they obviously have not revealed. 

But yeah, there are techniques in which you can optimize your content to rank on featured snippets, which most of the website owners are also trying to do.

So the position of the featured snippet is something that is kind of volatile when an algorithm update is launched.

Senthil: Let’s look at mobile as well. The data might be skewed anyways because COVID 19 is there and there will be naturally a decrease.

Dileep: The problem is that the user’s mentality for search has changed over the last two months. There are a lot of daily updates that they are rolling out. So, based on that, they are tweaking the search results on a daily basis.

There are chances that some of the niches, before even launching this particular update, were facing some kind of fluctuation in their ranking because the user intent kind of changed after the COVID incident.  

Senthil: Absolutely. So I think the best thing for whoever has lost the rankings, I know some, I mean, there are always winners and losers and it changes.

So if you’ve won, congratulations. Enjoy the cake and enjoy the victory. But the people who have been building so far and have lost the rankings, I don’t think we should panic. I think there’s not much that you can do about it. Probably, you can get back to the basics. Try to check the spammy links, try to look at the person that outranks you. Look at that page and look at the content styling.

Dileep, what I feel is when people are looking at it, they miss the point that “what is the intent?” Why did Google outrank it? For example, at the beginning of the conversation, I told you. This was a money page where it is trying to sell an insurance product.

But the website that has outranked this website has a page which talks about top 10 insurance products that you can buy. And it has a really good comparison. So I think the intent shift is there. Instead of thinking that you should add all your content on the service page, maybe you can rethink the strategy, look at the search results and figure out what Google prefers to show more.

Maybe if they are gonna show more blog pages, then maybe it’s time for you to recheck if any blog on your website has the capacity to go and try to do that. Because if you’re still stuck with the money page, then probably, it might work or it might not.

So it’s always better to have your link man in place so that if anything happens and you know, you still have a lifeline to save you. 

Dileep: I think one of the reasons why people try to put these kinds of money keywords for the service pages itself is because the number of conversions that happen from a blog page probably might be a little less than the service page.

So that is probably the thought process. But here, the intent is different than the user who is coming from a blog based content will never buy a product from your website. So he’s in the top of the funnel probably, like we say in marketing. So he’s just exploring the options. Even if you are ranking for the keyword on the featured snippet, I don’t think the person will buy the product from you.

He will again go back, he will do his research and probably, later on in the stage of his marketing funnel, he might come back again. But again, at the initial stage, he will not. So ideally, if that is the case, then I would prefer having a blog rather than my service page. 

Senthil: Absolutely. You’re right. Actually, the money keyword is still there. Money keywords, like for example buy with a particular name, the ranking still hasn’t changed. But the ranking has changed for the top of the funnel informative keywords. So I think the change is now Google itself is trying to realign themselves into saying hey if you’re an insurance company, don’t go after the keyword insurance. That’s all. That’s what Google is trying to say. 

If it’s like insurance, it means the intent of the user is to look at multiple comparisons. So even if you’re a big brand so far, you might have won that cushion layer from Google because Google thinks the best brands should always rank, but now I think that is getting decimated with this. 

So basically, they want to keep the users happy. That’s how Google earns. So if the user is not happy, then they are losing their own money. So they want to make users as happy as possible.

They have all the ways to understand whether the user is happy or not. They can check the time spent on the page, the bounce rate; these are all signals that Google already has to measure whether the user is happy or not. So there is no way we can trick it. I think the thought process should be to align ourselves with the intent of the user.

Just forget about Google, let’s just forget about Google for the time being, and just try to focus on your user. If your user is coming for something, which is probably a blog post, you provide him that rather than putting your service page upfront.

Senthil: So hopefully, yes. Guys, that summarizes the discussion. So, if you are affected by this algorithm update, please don’t panic. Just look at who has outranked you. Look at what intent that website satisfies and ensure that you’re able to match up to that expectation. And for God’s sake, don’t go after the single keywords. Go after the keywords that solve the intent.

That is exactly what Google is trying to do. Don’t try to be Wikipedia. Try to be a brand. You’re there to make money, so whatever pages that make you money, try to focus on the page and don’t go after the tons of other things out there.

So that summarizes this discussion. Thank you so much, everyone. So Dileep, you have anything else to add before I wind up this podcast? 

Dileep: One more thing, I think there are a lot of people who are in panic mode because I am seeing a lot of comments on our Google algorithm update page, telling that they are almost on the verge of closing down their business because of this update. So what I would like to tell them is this is something that is probably a phase in your digital marketing platform. So basically, it can turn around within the next algorithm update. 

So what usually happens is once the algorithm update is launched, then after two or three days, Google would analyze whether the update has hit some genuine websites. So what they used to do is they used to roll back whatever negative things were there within the update.

Senthil: Like a refresh. They usually release it after a couple of weeks. So your job is not lost yet, guys; don't worry. There will definitely be a refresh coming up, and the good sites that got knocked down by this will get their rankings back. But that also depends on who outranked you. If the site that outranked you is genuinely better, then there is very little chance.

Dileep: Yeah, but again, if they keep on trying to improve upon their content, I’m pretty sure they’ll be able to rank when the next broad core algorithm is launched. So Google itself says a majority of the sites who are hit by a broad core algorithm update, they can only recover when another broad core algorithm update is launched.

So basically, just wait for another two months. Do a good job, continue producing good quality content, and then you’ll have a sure shot of ranking again. 

Senthil: Absolutely. Great. Thanks, man. It was a good conversation. So anyone having any questions around this, please feel free to leave your comments.

We’ll do our best to get back to you. Dileep will answer whatever technical things that you guys have. Feel free to give a shout out and hopefully, we get through this. Already COVID on one side and the core algorithm on the other side. Tough times actually make the best businesses, so if you’re able to survive the next two months, you can survive at least for a while.

So all the best guys! Feel free to share your comments, and don’t forget to like, share, and subscribe. I’m just trying to become like the YouTubers. Anyway, thank you so much. Have a nice day.

The only way forward for websites impacted by the broad core update is to improve the quality and authority of their content.

We did a quick analysis of the popular algorithm trackers and the impact of the Broad Core Update seems significant. Here are a few screenshots.




[Screenshots: SERP Metrics, SEMRush Sensor, Rank Ranger, Accuranker]

We will keep you posted with all insights about the May 2020 Core Update when the SERP dance settles down.

Unusual SERP Changes During Unusual Times (March – April)

The weeks following the spread of COVID-19 have been quite tumultuous, with almost all Google Algorithm Update trackers displaying spikes in algorithm activity. We are not sure whether this is due to the change in global trends after the pandemic or to some incremental updates that Google has rolled out.

With many local businesses now temporarily closed, the local search results are witnessing many changes. The Google Map Pack is currently subject to high volatility due to the COVID-19 circumstances.

However, we believe that this is a passing phase, and things will normalize once the threat of COVID-19 is over. We are all looking forward to that day and hope you and your family are safe during these difficult times.

Google has taken several steps to ensure that no fake news reaches the public through its SERP, and this has resulted in many health and wellness websites seeing massive fluctuations. If you check the SERP for COVID-19, it's evident that Google doesn't want to take a chance by listing sites other than high-authority ones.

Since Google has not made any announcements regarding a core update, these changes have to be interpreted as the result of a change in users' search behavior.

Going by the algorithm trackers, there is high fluctuation almost every day. What we are now going through is an unforeseen set of circumstances, and the same is happening in the search landscape.

Such a scenario has never occurred in the internet era, and the current changes and fluctuations require in-depth analysis before we can pinpoint the factors driving such violent SERP changes.

If you are seeing a massive traffic drop or declining keyword positions, I suggest you not make a series of changes at this point in time. Keep doing the great work and wait until the good times are back.

SERP Fluctuations Through the Eyes of Algorithm Trackers



[Screenshots: Advanced Web Ranking, SEMRush Sensor]



Unconfirmed Algorithm Update – February 8 – 13, 2020

We understand that thousands of algorithm tweaks happen every year. However, the one that occurred during the last week seems to be as significant as a broad core algorithm update!

Announcing important algorithm updates before their rollout is a habit Google has followed for the last two years. Thanks to Danny Sullivan, the SearchLiaison Twitter handle has been doing a great job of keeping webmasters informed about impending updates.

However, it seems like an update with as much or even more impact than a Broad Core Algorithm Update was rolled out last week – unnoticed.


[Screenshots: Algoroo, SERPMetrics, SEMRush Sensor, Rank Ranger]

I asked Danny whether there was a significant update. However, the answer turned out to be the same template response that Google executives give when they are hesitant to reveal further details of an update.

However, I'd take this reply from Danny as a "Yes" to my question. We will be doing an in-depth analysis of the websites impacted by this update in the coming days.

Featured Snippet Algorithm Update – January 23, 2020

Google officials announced the rollout of an algorithm update that restricts a URL shown in the featured snippet from appearing again within the first ten organic search results.

Google's Danny Sullivan, while replying to a question on Twitter, confirmed that a webpage featured in the snippet position, a.k.a. position #0, will not show up again in the listing.

According to Danny, Google's Public Search Liaison, the new tweak in the algorithm ensures the search results page is not cluttered and only relevant information gets displayed.

He also confirmed that, starting today, the featured snippet will be counted as one of the ten listings on the SERP.

“If a web page listing is elevated into the featured snippet position, we no longer repeat the listing in the search results. This declutters the results & helps users locate relevant information more easily. Featured snippets count as one of the ten web page listings we show,” tweeted Danny.
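
The deduplication Danny describes can be pictured with a small sketch. This is our own illustration of the stated behavior, not Google's actual code; the function name and parameters are hypothetical.

```python
def build_serp(snippet_url, organic_urls, page_size=10):
    """Illustrative model of the January 2020 change: the page
    elevated into the featured snippet counts as one of the
    listings, and its URL is no longer repeated among the
    organic results below it."""
    deduped = [u for u in organic_urls if u != snippet_url]
    # The snippet occupies one slot, so only page_size - 1
    # organic listings remain on the page.
    return [snippet_url] + deduped[:page_size - 1]

print(build_serp("a.com", ["b.com", "a.com", "c.com"], page_size=3))
# ['a.com', 'b.com', 'c.com']
```

The net effect on site owners is the same one discussed in the article: ranking in position #0 no longer also yields a second listing further down the page.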

Danny also confirmed that the new update has been rolled out 100% and is now effective globally.

Interestingly, I had predicted the same to happen last week and had asked Google’s John Mueller whether Google is testing the new featured snippet update. John’s reply to my tweet was convincing enough to believe that they were indeed testing the new update, which is now live.

Google January 2020 Broad Core Algorithm Update

As announced, on January 13th Google rolled out the first algorithm update of 2020. The significance of this update is greater because the search engine giant has confirmed that the new Google update is a broad core algorithm update.

The rollout was pre-announced through the official Google Search Liaison Twitter handle. The tweet read, "Later today, we are releasing a broad core algorithm update, as we do several times per year. It is called the January 2020 Core Update. Our guidance about such updates remains as we've covered before."

Google rolls out close to a thousand algorithm updates each year. However, it’s only a few times in a year that a broad core algorithm update is released. The significance of the broad core algorithm is closely related to the impact it has on the websites.

Unlike the smaller core updates, which usually go unnoticed due to the fewer tremors they cause, a Broad Core Algorithm Update makes significant SERP fluctuations, often giving webmasters a panic attack.

All this said, Google has been telling webmasters that the only way to recover from the impact of a Broad Core Algorithm Update is to build great content. It also says these updates are focused on improving search quality by giving users better results.

During the initial days after the algorithm update, various algorithm update trackers started showing huge fluctuations. This is an indication that a lot of websites are in fact seeing increases or decreases in their organic ranking positions.


[Screenshots: Accuranker, Algoroo, Moz Cast, Rank Ranger, SEMRush Sensor, SERP Metrics]

What is a Google Broad Core Update?

A broad core update is an algorithm update that can impact the search visibility of a large number of websites. Each time such an update is rolled out, Google reconsiders the SERP ranking of websites based on expertise, authoritativeness, and trustworthiness (E-A-T).

  • Unlike the daily core updates, the broad core update comes with far-reaching impact.
  • Fluctuations in ranking positions can be detected for search queries globally.
  • The update improves contextual results for search queries.
  • There is no quick fix for websites that were hurt by a previous Google update.
  • The only fix is to improve content quality.
  • Focus more on expertise, authoritativeness, and trustworthiness (E-A-T).

To know more about what a broad core algorithm update is, check our in-depth article on the same. We will provide the ins and outs of the new update in a short while. Please keep a tab on this blog.

Future Algorithms will be of Google, by Google, and for Google

As we all know, Google organic search is on a self-induced slow poison! How many of you remember the old Google search results page, where all the organic results were on the left and minimal ads on the right? Don’t bother; remembering isn’t going to bring it back!

If you’ve been using Google for the last two decades, then the transformation of Google Search may have amazed you. If you don’t think so, just compare these two screenshots of Google SERP from 2005 and 2019.



Google started making major changes to the algorithm, starting with the 2012 Penguin update. During each Google Algorithm Update, webmasters focus on factors such as building links, improving the content, or technical SEO aspects.

Even though these factors play a predominant role in the ranking of websites on Google SERP, an all too important factor is often overlooked!

There has been a sea change in the way Google displays its search results, especially with the UI/UX. This has impacted websites more drastically than any other algorithm update launched to date.

In the above screenshot, the first fold of the entire SERP is taken over by Google features. The top result is a Google Ad, the one next to it is the map pack, and on the right you have Google Shopping Ads.

The ads and other Google-owned features that occupied less than 20% of the first fold of the SERP Page now take up 80% of it. According to our CTR heatmap, 80% of users tend to click on websites that are listed within the first fold of a search engine results page. 

This is an alarming number as ranking on top of Google SERP can no longer guarantee you higher CTR because Google is keen to drive traffic to its own entities, especially ads. 

Since this is a factor that webmasters have very little control over, the survival of websites in 2020 and beyond will depend on how they strategize their SEO efforts to understand the future course of the search engine giant.

When talking about how Google Algorithm Updates might work in 2020, it’s impossible to skip two trends – the increasing number of mobile and voice searches. The whole mobile-friendly update of April 2015 was not a farce, but a leap ahead by the search engine giant that would eventually make it a self-sustained entity. 

We will discuss voice and mobile search in detail a bit later, as they require a lot of focus.

Algorithms will Transform Google as a Content Curator

If you dig a little deeper into the history of search engines, you’d know that Yahoo started as a web directory that required entering details manually. Of course, this wasn’t a scalable model. Google’s founders, on the other hand, decided to build algorithms that could fetch the data and store it for the future. Google later realized that this model could be turned into one of the most ROI-generating ones.

Google of 2019 is both a content curator and a search engine. However, moving forward, Google will be less of a search engine and more of a content curator. Still wondering how Google is curating content to its users? Here are a few examples: 

Just Google “Hepatitis B” and you will find a knowledge graph on the right that is autogenerated by Google. 

This particular information about Hepatitis B is generated by Google’s Learning Algorithm by stitching together data from authority websites. According to Google, this medical information is collected from high-quality websites, medical professionals, and search results. 

With Google being the repository of important web pages that users value, you can expect more such self-curated content in Google search. Interestingly, even the creatives used in such results are created by Google. Another example of Google doing self-attribution.

Here is Another Example of Google Curating Content

A Google search for “how tall is the Eiffel tower?” will display a knowledge card with the exact answer to the user’s question, without any attribution. 

But further scrutiny into the SERP, especially the right-side Knowledge Graph, will help you find out how Google came up with the answer. 

This is an indication of how critical the structured data would be in 2020 and the years to follow. However, structured data is a double-edged sword as the Google of 2020 may use it on SERP (like in this case) with zero attribution. 
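For site owners, structured data of this kind is typically supplied as schema.org JSON-LD. Here is a minimal, hypothetical sketch: the type and property names come from the schema.org vocabulary, but the page and values are invented for illustration.

```html
<!-- Hypothetical schema.org markup for a landmark page. Values are
     illustrative; facts marked up this way can feed knowledge cards,
     sometimes with no attribution back to the source page. -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "TouristAttraction",
  "name": "Eiffel Tower",
  "description": "Wrought-iron lattice tower on the Champ de Mars in Paris.",
  "address": {
    "@type": "PostalAddress",
    "addressLocality": "Paris",
    "addressCountry": "FR"
  }
}
</script>
```

This is exactly the trade-off described above: the markup makes your facts machine-readable, but once extracted, Google can display them without sending the user to your site.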

Google Algorithms will Stick to Its Philosophy, But with a Greedy Eye 

If you think Google is fair and isn’t greedy, here is something that you may have missed in the earlier screenshot. 

The way Google is moving ahead seems like the algorithms will scatter ads clandestinely within the SERP to direct more traffic to promoted/sponsored content. 

Basically, Google is taking data from your website, repurposing it for the knowledge graph, and getting monetary benefits, which ultimately do not reach you. However, taking into account Google’s current position, this has to be seen as a desperate move!

Google was petrified by the decrease in click-through rates for results on mobile devices and did everything possible to get its million-dollar advert revenue back on track. One such step was what we now call Mobilegeddon. The mobile-friendly update introduced in 2015 was a silent threat to websites, asking them to either toe the line or be ready to be pushed to the graveyard (the second and following pages of Google search).

The perk Google earned from this strategy is that it saved the time and effort of updating its algorithm to crawl and render both mobile and desktop versions of websites. So, with mobile-first indexing in place, Google decided to use a mobile-first UI/UX.

Here is how

(Screenshot of the Mobile-first design of Google)

E-A-T (Expertise, Authoritativeness, Trustworthiness)

Google has been quite vocal about maintaining the E-A-T standard. Even though this is currently focused on websites in the YMYL category, moving forward the algorithms will become smarter and start applying it to all niches.

Google’s algorithms will keep ensuring that websites featured on the first page of its search provide the most accurate information for users. Implementing this across the web will ensure that users only get the best results on the SERP.

Online reviews, social mentions, brand mentions, and general sentiments across the web will play a vital role in ranking websites on Google. 

No-follow and UGC Links Will Pass Link Juice

It had been a while since Google made any major announcement with regard to links. However, 2019 saw the search giant adding two new link attributes in addition to do-follow and no-follow.

UGC and Sponsored are the two new link attributes that will soon become part of the Google ranking factors. A majority of the sites use the no-follow as the default attribute for external links. This is one reason why Google introduced the two new link attributes. 

Moving forward, no-follow and UGC links will start passing link juice. Even though the weight of these links will not be as significant as do-follow links, they will definitely play a vital role in the future.

When it comes to the promoted links, Google will ensure that its algorithm completely ignores passing the link juice. Google has asked the webmasters to start using these attributes as early as possible as they are scheduled to become part of the Google Ranking Factors in March 2020. 
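In markup, the options look like this (the URLs are placeholders; the rel values are the ones Google documented in 2019):

```html
<a href="https://example.com/guide">Editorial link, passes link juice</a>
<a href="https://example.com/ad" rel="sponsored">Paid or promoted link</a>
<a href="https://example.com/forum-post" rel="ugc">Link in user-generated content</a>
<a href="https://example.com/untrusted" rel="nofollow">Link the site doesn't vouch for</a>
<!-- Values can also be combined: -->
<a href="https://example.com/ad" rel="nofollow sponsored">Paid link, flagged both ways</a>
```

From March 2020, Google treats nofollow and ugc as hints for ranking rather than strict directives, which is what the section above means by these links starting to pass link juice.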

BERT Update Live for 70 Languages – December 9, 2019

Google has officially announced the rollout of BERT (Bidirectional Encoder Representations from Transformers) in Google Search across 70 languages.

Earlier in October, Google rolled out BERT, touting it as its latest and most reliable language processing algorithm. BERT has its origins in the Transformer project undertaken by Google engineers.

During the announcement of BERT Algorithm Update, Google confirmed that its new language processing algorithm will try to understand words in relation to all the other words in a query, rather than one-by-one in order. This gives more impetus to the intent and context of the search query and delivers results that the user seeks.

The Google SearchLiaison official tweet says, “BERT, our new way for Google Search to better understand language, is now rolling out to over 70 languages worldwide. It initially launched in Oct. for US English.”

Here is the list of languages that use the BERT natural language processing algorithm to display Google search results:

Afrikaans, Albanian, Amharic, Arabic, Armenian, Azeri, Basque, Belarusian, Bulgarian, Catalan, Chinese (Simplified & Taiwan), Croatian, Czech, Danish, Dutch, English, Estonian, Farsi, Finnish, French, Galician, Georgian, German, Greek, Gujarati, Hebrew, Hindi, Hungarian, Icelandic, Indonesian, Italian, Japanese, Javanese, Kannada, Kazakh, Khmer, Korean, Kurdish, Kyrgyz, Lao, Latvian, Lithuanian, Macedonian, Malay (Brunei Darussalam & Malaysia), Malayalam, Maltese, Marathi, Mongolian, Nepali, Norwegian, Polish, Portuguese, Punjabi, Romanian, Russian, Serbian, Sinhalese, Slovak, Slovenian, Swahili, Swedish, Tagalog, Tajik, Tamil, Telugu, Thai, Turkish, Ukrainian, Urdu, Uzbek, Vietnamese, and Spanish.

Difference Between BERT and Neural Matching Algorithm

The recent announcement about the rollout of the November local search algorithm update by Google has opened up a Pandora’s box of questions in the webmaster community. The whole hoo-ha about the update stems from the term “neural matching.”

It was only in October that Google announced the rollout of its BERT update, which is said to impact 10% of search results. With another language processing algorithm update now in place, the webmaster community is confused as to what difference each of these updates will make on SERP results.

Google has patented many language processing algorithms. The recent BERT and Neural Matching are just two among them. The Neural Matching algorithm has been part of search results since 2018. However, it was complemented by the BERT update in October 2019.

As of now, Google has not confirmed whether the Neural Matching Algorithm was replaced by the BERT or if they are working in tandem. But the factors that each of these algorithms use to rank websites are different.

The BERT algorithm is derived from Google’s ambitious Transformer project – a novel neural network architecture developed by Google engineers. BERT tries to decode the relatedness and context of search terms through a process of masking: it predicts each masked word from all the surrounding words, and in doing so learns how each word relates to the rest of the query.

As for Neural Matching, the algorithm is closely related to research Google did on retrieving highly relevant documents on the web. The idea here is primarily to understand how words are related to concepts.

The Neural Matching algorithm uses a super-synonym system to understand what the user meant by typing in the search query. This enables users to get highly relevant local search results even if the exact terms don’t appear in the search query.
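The super-synonym idea can be sketched with a toy example. This is purely illustrative – the synonym table below is made up, and Google's actual system uses learned neural representations rather than a lookup table – but it shows how a listing can match a query without containing its exact words:

```python
# Toy sketch of synonym-expanded matching. The SYNONYMS table is hypothetical;
# Google's neural matching learns word-concept relations rather than relying
# on a hand-written dictionary.
SYNONYMS = {
    "fix": {"repair", "mend", "service"},
    "phone": {"mobile", "smartphone", "cell"},
}

def expand(term):
    """Return the term together with its known synonyms."""
    return {term} | SYNONYMS.get(term, set())

def matches(query, listing):
    """True if every query word, or a synonym of it, appears in the listing."""
    listing_words = set(listing.lower().split())
    return all(expand(word) & listing_words for word in query.lower().split())

print(matches("fix phone", "We repair smartphone screens"))  # True
print(matches("fix phone", "Fresh bread baked daily"))       # False
```

The first listing matches even though it contains neither "fix" nor "phone" – the synonym expansion bridges the gap, which is the behavior the super-synonym system aims for at scale.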

For local business owners, the Neural Matching algorithm will rank businesses better even when their business name or description isn’t optimized for the user’s query. Neural Matching in local search results will be a boon to businesses, as the primary ranking factor will be the relatedness of words and concepts.

Basically, the BERT and Neural Matching Algorithms have different functional procedures and are used in different verticals of Google. However, both these algorithms are trained to fulfill Google’s core philosophy – to make the search results highly relevant.

Local Search Algorithm Update – November 2019

Google has confirmed that the fluctuations in organic search reported throughout November were the result of it rolling out the Nov. 2019 Local Search Update – the official name coined by Google. There was a lot of discussion about a possible algorithm update during the first and last weeks of November. However, Google did not comment until December 2nd.
The official announcement about the update came via a tweet by Danny Sullivan through the official Google SearchLiaison Twitter handle. The tweet read:

Google has also confirmed that the rollout, which started early last month, has been completed. However, Google has also said that webmasters need not make any changes to their sites, as this local algorithm update is all about improving the relevance of search results based on user intent.
The search engine giant has also confirmed that the local search algorithm update has a worldwide impact across all languages.

The new update will help users find the most relevant results for local search that matches their search intent. However, Google has been using this for displaying search results since 2018, primarily to understand how words are related to concepts.

The new algorithm will now understand the concept behind the search by understanding how the words in the search query are closely related to each other. Google says it has a massive directory of synonyms that help the algorithm to do that neural matching.

Starting November 2019, Google will use its AI-based Neural matching system to rank businesses in Local Search results. Until recently, Google was using the exact words found on a business name or description to rank websites on local search.

Unconfirmed Google Algorithm Update – November 27, 2019

There is a major tremor in some algorithm trackers, and this could be an indication of another core update as significant as the one rolled out on November 8, 2019. A few algorithm trackers have picked up the heat, and some are showing just a little spike as of the 27th. We will keep a close tab on how this new Google update impacts ranking fluctuations.

Moz Weather

November Google Algorithm Update - Moz Weather

Rank Ranger

November Google Algorithm Update - Rank Ranger

Unconfirmed Google Algorithm Update – November 8, 2019

There is a lot of chatter in the SEO arena about a major shift in website rankings during the second week of November. However, there is no official confirmation from Google, which suggests it could be one of the unnamed core updates that Google has confirmed happen hundreds of times a year.

The chatter was mostly focused on websites in categories such as recipes, travel, and web design. A closer look into some of these sites revealed no major on-page issues. That said, a deeper link analysis gave us a fair idea about the pertinent question: “Why us?”

Recipe sites, Travel Blogs, and Web Design companies get a lot of footer links, and most of the time, they are out of context. This, according to the Google link scheme document, is a spammy practice. According to Google, “widely distributed links in the footers or templates of various sites” will be counted as unnatural links. This may have played spoilsport, resulting in a drop in rankings.

After the online chatter, Google came up with an official confirmation via a tweet on the SearchLiaison Twitter handle, stating that there had not been any broad updates in the past weeks. However, the tweet once again reiterated that several updates happen on a regular basis.

In the Twitter thread, Google also gave examples of the type of algorithm updates that will have a far-reaching impact on search and how the search engine giant informs webmasters prior to the launch of such updates to ensure that they are prepared.


Only a few Algorithm trackers have registered the impact:

Advanced Web Analytics

November Google Algorithm - Advanced Web Analytics Algorithm Tracker

Moz Weather

November Google Algorithm - Moz Weather Algorithm Tracker

Rank Ranger

November Google Algorithm - Rank Ranger Algorithm Tracker

SEMrush Sensor

November Google Algorithm - SEMrush Algorithm Tracker

SERP Metrics

November Google Algorithm - SERPMetrics Algorithm Tracker

Google BERT Algorithm Update – October 2019

It’s been close to five years since Google announced anything as significant as the BERT update. The last time an update of this magnitude was launched was back in 2015, when the RankBrain algorithm was rolled out.

According to the official announcement of Google, the new BERT update will impact 10% of overall search results, across all languages. The statement says that BERT is the most significant leap forward in the past five years, and one of the biggest leaps ahead in the history of search.

With so much emphasis given to the latest Google algorithm update – BERT – it’s most likely going into the SEO history books along with its predecessors Penguin, Panda, Hummingbird, and RankBrain. The update will affect 1 out of 10 organic search results on Google, with a major impact on featured snippets.

Bidirectional Encoder Representations from Transformers, codenamed BERT, is a machine learning advancement made by Google as part of its artificial intelligence innovation efforts. The BERT model processes words in relation to all the other words in a sentence, rather than one by one in order. This gives more impetus to the intent and context of the search query and delivers the results the user seeks.

The announcement about the BERT update was made through the official Twitter handle of Google SearchLiaison. The tweet read, “Meet BERT, a new way for Google Search to better understand language and improve our search results. It’s now being used in the US in English, helping with one out of every ten searches. It will come to more countries and languages in the future.”

The new BERT update makes Google one step closer to achieving perfection in understanding the natural language. This also means that the voice search results will see significant improvement.

To know more about Google’s BERT Update, read our extensive coverage on its origin, concept, and impact on Google search results.

September 2019 Core Algorithm Update Starts Rolling Out

Google has confirmed that the rollout of the September 2019 Core Update has officially begun. The announcement was made via its SearchLiaison Twitter handle. The tweet read: “The September 2019 Core Update is now live and will be rolling out across our various data centers over the coming days.”

Unlike the other Broad Core Updates launched by Google, the September 2019 Core Update didn’t have a massive impact on websites. However, the algorithm trackers registered fluctuations in SERP.


Moz Weather

September Google Algorithm - Moz Weather Algorithm Tracker

SERP Metrics

September Google Algorithm - SERPMetrics Algorithm Tracker

Algoroo

September Google Algorithm - Algoroo Algorithm Tracker

Accuranker

September Google Algorithm - Accuranker Algorithm Tracker

Rank Ranger

September Google Algorithm - RankRanger Algorithm Tracker

SEMRush Sensor

September Google Algorithm - SEMRush Sensor Algorithm Tracker

September 2019 Core Algorithm Update Pre-announcement

Google has once again confirmed, via its official SearchLiaison Twitter handle, that a new broad core algorithm update will roll out later on Tuesday. This is only the second time that Google has pre-announced the rollout of an algorithm update; the last time was before the June 2019 Core Update.

The June 2019 Core Update had a major impact on websites that failed to implement the E-A-T guidelines. Going by the current pattern, the new update will have a far-reaching impact on websites that fail to provide Google with quality signals. In addition, the new update is being rolled out after Google made three big announcements in the last week – the new nofollow link update, Google Reviews, and Key Moments in Videos.

We will keep you posted on how the new broad Core Update impacts the SERP appearance of websites.

Google Search Reviews Updated – September 16, 2019

It has been almost three months since Google came up with an official Algorithm Update announcement. The last time that the search engine giant issued a public statement was on June 4, 2019, when it rolled out the Diversity Update to reduce the number of results from the same sites on the first page of Google search.

However, on September 16, the official Google Webmaster Twitter account announced that a new algorithm is now part of the crawling and indexing process of review snippets/rich results. According to the tweet, the new update will make significant changes in the way Google Search Review snippets are displayed.

Here is what the official Google announcement says about the update:

“Today, we’re introducing an algorithmic update to review snippets to ease implementation: – Clear set of schema types for review snippets – Self-serving reviews aren’t allowed – Name of the thing you’re reviewing is required.”

According to Google, review rich results have been helping users find the best businesses and services. Unfortunately, there has been a lot of misuse of reviews, which is why Google has issued a few updates since first implementing the feature. The impact of Google Search reviews is becoming more and more felt in recent times.

The official blog announcing the rollout of the new Google Search review algorithm update says it will help webmasters across the world better optimize their websites for review snippets. Google has introduced 17 standard schema types for webmasters so that invalid or misleading implementations can be curbed.

Before the update, webmasters could add Google Search Reviews to any web page using the review markup.  However, Google identified that some of the web pages that displayed review snippets did not add value to the users. A few sites used the review schema to make them stand out from the rest of the competitors.

Putting an end to the misuse, Google has limited review schema types to 17 niches! Starting today, Google Search review snippets will be displayed only for websites that fall under these 17 types and their respective subtypes.

List of Review Schema Types Supported for Google Search Reviews

Self-serving reviews aren’t allowed for LocalBusiness and Organization

One of the biggest hiccups faced by Google in displaying genuine reviews was entities adding reviews by themselves via third-party widgets and markup code. Starting today, Google has stopped supporting Google Search Reviews for the schema types LocalBusiness and Organization (and their subtypes) that use third-party widgets and markup code.

Add the name of the item that’s being reviewed

The new Google Search Reviews Algorithm Update mandates the name property to be part of the schema. This will make it mandatory for businesses to add the name of the item being reviewed. This will give a more meaningful review experience for users, says Google.
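Put together, a snippet that satisfies both new rules might look like the following hypothetical markup. The business and reviewer are invented; `Review`, `itemReviewed`, `name`, and `reviewRating` are standard schema.org terms, and `Restaurant` is among the supported types.

```html
<!-- Hypothetical example. The required "name" of the reviewed item is present,
     and the review is published on a third-party page rather than self-served
     through the business's own LocalBusiness/Organization markup. -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Review",
  "itemReviewed": {
    "@type": "Restaurant",
    "name": "Example Bistro"
  },
  "reviewRating": {
    "@type": "Rating",
    "ratingValue": "4",
    "bestRating": "5"
  },
  "author": {
    "@type": "Person",
    "name": "Jane Doe"
  }
}
</script>
```

Omitting the `itemReviewed` name, or serving this markup from Example Bistro's own Organization page, is exactly what the update now disqualifies.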

Google Diversity Update Roll Out – June 4, 2019

Just a few days after the incremental June 2019 Core Update, Google officially confirmed that another update is now part of its Algorithm.

The new diversity update is intended to stop multiple search results from the same website from appearing on the first page of Google search. Our first impression is that the impact of this new tweak was pretty minor.

But discussions are happening in forums about how the update will impact branded queries, which may require Google to list several pages from the same site.

The announcement about the roll-out was made through the official Twitter handle of Google Search Liaison. The tweet read, “Have you ever done a search and gotten many listings all from the same site in the top results?

We’ve heard your feedback about this and wanting more variety. A new change now launching in Google Search is designed to provide more site diversity in our results.”

“This site diversity change means that you usually won’t see more than two listings from the same site in our top results.

However, we may still show more than two in cases where our systems determine it’s especially relevant to do so for a particular search,” reads the official statement from Google.

One of the major changes to expect after the diversity update concerns sub-domains. Google has categorically stated that sub-domains will now be treated as part of the root domain, so listings from a root domain and its sub-domains count toward the same two-listing limit.

Here is what Google says, “Site diversity will generally treat sub-domains as part of a root domain. IE: listings from sub-domains and the root domain will all be considered from the same single site. However, sub-domains are treated as separate sites for diversity purposes when deemed relevant to do so.”

June 2019 Core Algorithm Update Roll Out

As warned, the June 2019 core update is slowly being rolled out from Google’s data centers that are located in different countries. The announcement about the roll out was made from the same Google SearchLiaison Twitter account that made the pre-announcement.

The algorithm trackers have started detecting a spike in their graph. This indicates that the impact of the latest broad core algorithm update, which has been officially named June 2019 core update, is starting to affect SERP rankings.

Since Google has updated its Quality Rater Guidelines a few days back with more emphasis on ranking quality websites on the search, the latest update may be a quality patch for the search results page.

We will give you a detailed stat of the impact of the algorithm update on SERP as soon as we get the data from the algorithm trackers. Also, our detailed analysis of the websites hit by the update and the possible way to recover will follow.

June 2019 Core Algorithm Update Preannouncement

Google Search Liaison has officially announced that the search engine giant will roll out an important Algorithm Update on June 3rd.

The latest Google algorithm update, which will be a Broad Core Algorithm Update like the one released in March, will officially be called the June 2019 Core Update.

It is the first time that Google is pre-announcing the launch of an Algorithm update.

Here is the official Twitter announcement:

Unofficial Google Algorithm Update of March 27th

Yes, you heard it right. Google has made some significant changes to the algorithm during the final few days of the month of March.

We have seen Google making tweaks after the rollout of broad core algorithm updates, but the one we are witnessing now is huge, and some algorithm sensors have detected more significant ranking fluctuations than on March 12th, when Google launched its confirmed March 2019 Core Update.

The fluctuations that started on March 27th are yet to stabilize, and more and more webmasters are taking to forums after their website traffic got hit.

The latest tweak has come as a double blow for a few websites as they lost the traffic and organic ranking twice in the same month.

SEMRush Sensor

March Google Algorithm - SEMRush Sensor Algorithm Tracker

Google April 1 update - Algoroo

Google March 29 update - Grump

March Google Algorithm - RankRanger Algorithm Tracker

Google Desktop SERP

Google Officially Calls March Broad Core Algorithm Update as “March 2019 Core Update”

As I mentioned in my earlier update, Google representatives have a history of snubbing the names given by SEOs for their Algorithm Updates.

Usually, their criticism ends without attributing any name to the update, but it seems Google is now flexing its muscles and giving official names to its updates.

The official Google SearchLiaison Twitter handle announced on Thursday that Google would like to call the broad core algorithm update of March 12th the “March 2019 Core Update.”

“We understand it can be useful to some for updates to have names. Our name for this update is ‘March 2019 Core Update.’

We think this helps avoid confusion; it tells you the type of update it was and when it happened,” reads the tweet posted by Google SearchLiaison.

With this, let’s put an end to the debate over the nomenclature and focus more on the recovery of the sites affected by the March 2019 Core Update.

Recovering Sites Affected by March 2019 Core Update AKA Florida 2

Have you been hit by the March 2019 Core Update? There are several reasons why a website may lose traffic and rankings after Google rolls out an algorithm update.

In most cases, the SEO strategies the website used to rank in the SERP backfire; in other instances, Google finds a better site that provides superior-quality content as a replacement.

In both these cases, the plunge you’re experiencing can be reversed by implementing a well-thought-out SEO strategy with a heavy focus on Google’s E-A-T quality.

Google algorithm update Florida 2

However, the initial analysis that we did has some good news for webmasters. The negative impact of the latest update is far less than what we thought.

Interestingly, there are more positive results, and the discussion about the same is rife across all major SEO forums.

This makes us believe that the Broad Core Algorithm update on March 12 is more of a rollback of a few previous updates that may have given undue rankings for a few websites.

Importantly, we found that sites with high authority once again received a boost in their traffic and rankings.

We also found that websites, which had a rank boost last year by building backlinks through Private Blogging Networks, were hit by March 2019 Core Update, whereas the ones that had high-quality, natural backlinks received a spike.

If you’re one of many websites that were affected by the Google March 2019 Core Update, here are a few insights about the damage caused to sites in the health niche

Websites affected by the Florida 2 update

Health Niche

According to the data provided by SEMRush, Healthcare websites saw a massive fluctuation in traffic and rankings after the recent March 2019 Core Update.

SEMrush has also listed a few top losers and winners. We did an analysis of the top 3 losers, and here is what we found:

MyLVAD site analytics


MyLVAD is listed as one of the top losers in the Health category according to SEMRush. The Medic Update hit this site quite badly in August 2018, and it seems like the latest March 2019 Core Update has also taken a significant toll.

According to the SEMRush data, the keyword position of MyLVAD dropped by 11 positions on Mar 13. MyLVAD is a community and resource for people suffering from advanced congestive heart failure and relying on an LVAD implant.

We did an in-depth analysis of the site and found that it does not comply with Google’s E-A-T quality guidelines.

The resources provided on the site are not credited to experts. Crucially, the contact details are missing on the site.

Since it’s more like a forum, the website has mostly user-generated content. As this website falls under the YMYL category, it’s imperative to provide an author bio (the doctor’s designation, in this specific case) to increase the E-A-T rating.

Also, details such as ‘contact us’ and the people responsible for the website are missing.

PainScale analytics (SEMrush)


PainScale is a website, with a companion app, that helps users manage pain and chronic disease. The site got a rank boost after the September 2018 update, and until December that year everything was running smoothly.

The traffic and ranking started displaying a downward trend after January, and now the Florida 2 Update has reduced it further.

On analyzing the website, we found that it provides users with information about pain management. Once again, the authority of the content published on this website is questionable.

Though the site has rewritten some content from Mayo Clinic and other authoritative sites, aggregation of this kind is something Google does not reward.

The website also has a quiz section that provides tools to manage pain. However, it tries to collect users' health details and then asks them to sign up for PainScale for free.

Google has an aversion to this particular method, as it is concerned about the privacy of its users. This could be one of the reasons for the drop in PainScale's traffic and rankings after the March 2019 Core Update.

[Screenshot: MedBroadcast traffic loss]


This is yet another typical example of a YMYL website that Google puts under intense scrutiny. MedBroadcast gives a lot of information about health conditions and tries to provide users with treatment options.

Here again, like other websites on this list, there is no information regarding the author.

Moreover, the site has a strange structure, with a few URLs opening on subdomains. The website has also placed close to 50 URLs in the footer of the homepage and other inner pages, making it look very spammy.

This site also received undue traffic boosts after the Google Medic Update of August 2018. The stats show that the traffic increased after the Medic Update and started to decline at the beginning of January.

Once again, the emphasis is on E-A-T quality signals. The three examples listed above point to how healthcare sites that failed to follow the practices mentioned in the Google Quality Rater Guidelines were hit by the Florida 2 Update.

Here are a few tips to improve your website's E-A-T rating:

1. Add Author Byline to All Blog Posts

Google wants to know the authenticity of the person who is providing information to users.

If the site falls under the YMYL category, which includes websites in the healthcare, wellness, and finance sectors, the author should be someone who is an expert in that field. Google wants to ascertain that trustworthy and certified authors draft the content displayed to its users.

Getting content drafted by generalist content writers rather than experts is a trend largely seen among YMYL sites. Nevertheless, Google frowns on it and only wants to promote high-quality, trustworthy content.

2. Remove Scraped/Duplicate Content

Google calculates the E-A-T score of a website by analyzing individual posts and pages. If you're scraping or duplicating content from another website, chances are you may be hit by an algorithm update.

As seen in one of the above-mentioned examples, paraphrasing content doesn’t make it unique, and Google can identify these types of content very easily.

So, if you think your website has thin, scraped, or rewritten content, it would be ideal to remove it. For YMYL websites, ensure that the content is written by an expert in your niche.

3. Invest Time in Personal Branding

Make sure that your website has an “About Us” page and you’re providing valuable inputs to Google in the form of schema markup.

In addition, positive testimonials and customer reviews, both within the site and outside, can boost the trustworthiness of your website.

Google’s quality rater guidelines also ask webmasters to display the contact information and customer support details for YMYL sites.
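Schema markup of this kind is typically embedded as JSON-LD. As a rough sketch (the person, organization, and URL below are invented for illustration; tailor the fields to schema.org's Person type), such markup can be generated with Python's json module:

```python
import json

def author_schema(name, job_title, org, profile_url):
    """Build minimal schema.org Person markup for an author byline.
    All field values passed in here are illustrative, not real people."""
    return {
        "@context": "https://schema.org",
        "@type": "Person",
        "name": name,
        "jobTitle": job_title,
        "worksFor": {"@type": "Organization", "name": org},
        "url": profile_url,
    }

schema = author_schema("Dr. Jane Doe", "Cardiologist", "Example Health",
                       "https://example.com/authors/jane-doe")
# Embed the serialized result in a <script type="application/ld+json"> tag.
print(json.dumps(schema, indent=2))
```

Embedding a block like this next to the byline gives crawlers a machine-readable statement of who wrote the page and on whose behalf.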

4. Focus on the Quality of Backlinks Rather Than Quantity

The Google search quality guidelines suggest that websites with high-quality backlinks have a superior E-A-T score.

Investing time in building low-quality links through PBNs and blog comments may invite Google's wrath and can adversely affect a website's E-A-T rating.

It’s highly recommended to use white hat techniques such as blogger outreach and broken link building as part of the SEO strategy to get high-quality links.

5.  Secure Your Site With HTTPS

The security of its users is a priority for Google, which is why it's pushing websites to get SSL certificates. HTTPS is now one of the ranking factors, and it's also one of the ways to improve a website's E-A-T rating.

Also, it has to be noted that Google Chrome now shows all non-HTTPS sites as insecure, which is a clear indication of how Google values the privacy of its users.
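After migrating to HTTPS, leftover http:// references in markup cause mixed-content warnings in browsers. The sketch below is only a toy check (the regex is deliberately simplistic, and the sample page is made up); dedicated crawlers do this far more thoroughly:

```python
import re

def insecure_references(html: str) -> list:
    """Flag http:// URLs in src/href attributes, a common source of
    mixed-content warnings after an HTTPS migration. Illustrative only."""
    return re.findall(r'(?:src|href)="(http://[^"]+)"', html)

page = ('<img src="http://example.com/logo.png">'
        '<a href="https://example.com/">Home</a>')
print(insecure_references(page))  # ['http://example.com/logo.png']
```

Running a check like this over your templates after switching to HTTPS helps ensure the padlock actually shows for visitors.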

To learn more about the factors Google uses to rank health websites, read our in-depth article, “Advanced SEO for Healthcare and Medical Websites: Tips to Improve Search Quality Rating“.

Here are a few rumors from Black Hat World about the latest Google March 2019 Core Update, aka the Florida 2 Update:

“There are 5 projects of mine which are very, very similar to each other. All targeting Beauty/Health niche, all of them have a lot of great content (10k+ articles) and all of them are build on expired domains with a nice brandable name,” says a user going by the name yayapart.

“The G Core Update hit 4 of my 5 projects. One of them, actually the oldest one of them, got a huge push and increased it’s organic traffic about 40%.

“The other 3 projects that are affected lost all their best rankings. I don’t see any pattern here yet but it hit me hard,” he added.

Another user going by the name Jeepy says, “Health Niche. This site got hit by medic update and now it’s rising back without doing anything. Right…”

[Screenshot: Black Hat World chat]


Confirmed: Google March 2019 Core Update AKA Florida 2 Update – March 12, 2019

The official SearchLiaison Twitter handle of Google confirmed that a Broad Core Algorithm update started rolling out on March 12th.

Like other broad core algorithm updates, the latest one will be rolled out in phases, and we are not sure when the “SERP dance” will stabilize. SEOs started calling it the Florida 2 Update.

“This week, we released a Broad Core Algorithm update, as we do several times per year. Our guidance about such updates remains as we’ve covered before,” read the tweet post on the official Google SearchLiaison handle.

Read our blog to know more about the latest Broad Core Algorithm Update.

Our analysis found that the Google March 2019 Core Update reversed the undue rankings that a few websites got after the Medic Update of August 2018.

In addition to this, most of the sites hit by the update used low-quality links to increase their authority.

See how ranking sensors detected the change:


[Screenshots: Moz Weather, SEMrush Sensor, AccuRanker, Advanced Web Ranking, RankRanger, and Algoroo algorithm trackers]

Google Algorithm Update – March 1st, 2019

In my earlier update, I had predicted that Google was preparing for some big changes in the coming days. Now, validating my ESP, the latest Google algorithm update rolled out on March 1st seems to be bigger than we initially thought.

There are reports that Google displayed more than 19 search results on a single page for a few search terms during this period.

We will be doing a detailed blog post about this in a few days. There are also murmurs about Google giving more preference to websites that have in-depth content.

Interestingly, Dr. Peter J. Meyers of Moz found that Google displays well-researched articles in the results even for a few buyer intent keywords.

Google possibly wants a mix of products and information to feature in its search results. This way, users who are undecided about products can read up and be informed before making a purchase decision.

There is also chatter about a drop in the number of image search results after the latest Google update on March 1st.

An interesting analytics screenshot shared by Marie Haynes depicts how one of her clients got a massive boost in organic traffic between February 27th and March 1st.

See how ranking sensors detected the change:


[Screenshots: AccuRanker, Advanced Web Rankings, Algoroo, MozCast, RankRanger, SEMrush Sensor, and SERPmetrics algorithm trackers]

Google Algorithm Update – February 27, 2019

Google seems to be making a lot of tweaks to its algorithm this month, as there is yet again a spike in the algorithm trackers, suggesting the rollout of an algorithm update.

A similar spike was witnessed on the 22nd of this month, but the chatter soon subsided, probably because of the limited impact it had on websites.

However, it seems like something big is brewing for the coming days that will make significant changes to the SERP results in Google.


[Screenshots: AccuRanker, Advanced Web Ranking, MozCast, Rank Ranger, SEMrush Sensor, and SERPmetrics algorithm trackers]

Google Algorithm Update- February 22, 2019

There is talk of an algorithm update, but this time the impact is not as far-reaching. All major algorithm trackers detected a sudden spike in their ranking sensors, but it didn't last long.

It seems like Google may have done a little tweaking to its algorithm over the weekend, especially on Friday.

Other than a bit of chatter here and there regarding lost traffic, there is nothing concrete to mark this as a significant algorithm update.

Since it’s widely talked about in blogs and SEO forums, let’s look at a few trackers that sensed the update.

[Screenshots: Advanced Web Ranking, Algoroo, AccuRanker, MozCast, RankRanger, SEMrush Sensor, and SERPmetrics algorithm trackers]

Google Algorithm Update – February 5, 2019

There is a lot of chatter about an algorithm update from Google that has affected many websites, mostly the ones that are in the UK.

The trackers were more-or-less calm after Jan 16th, but we have noticed a sudden spike across all major algorithm update trackers.

Some algorithm trackers suggest that the update rolled out during February 5-6 was more devastating than the one rolled out in January.

We will keep you posted as we dig deeper into the sites affected by the algorithm update and diagnose the reasons for the drop in rankings.

Here are a few stats:


[Screenshots: SERPmetrics, AccuRanker, Moz Weather, Rank Ranger, and SEMrush Sensor algorithm trackers]

Google Algorithm Update – January 18, 2019

We had announced earlier this week that Google rolled out an incremental Algorithm update – the first in 2019.

The impact of the algorithm update was hard on news websites and blogs of various niches, which is why we named the latest Google update the “Newsgate Algorithm Update.”

A new document released by Google on 16th January 2019 corroborates our findings, as it provides advice and tips for news publishers to find more success in 2019.

Our analysis had found the algorithm update rolled out during the second week of January affected the news sites that “rewrote content” or “scraped content from other sites.”

The latest document released by Google proves that we were right. There are two specific sections in the document, titled “Ways to succeed in Google News,” which recommend that news publishers stay away from publishing rewritten and scraped content.

What the Google document says:

  • Block scraped content: Scraping commonly refers to taking material from another site, often on an automated basis. Sites that scrape content must block scraped content from Google News.
  • Block rewritten content: Rewriting refers to taking material from one site and then rewriting that material so that it is not identical. Sites that rewrite content in a way that provides no substantial or clear added value must block that rewritten content from Google News. This includes, but is not limited to, rewrites that make only very slight changes or those that make many word replacements but still keep the original article’s overall meaning.

In addition to this, the “Ways to succeed in Google News” document also highlights a few best practices that news publishers should keep in mind before publishing a story.

  • Write descriptive and clear titles
  • Display the accurate date and time using structured data
  • Avoid duplicate, rewritten, or scraped content
  • Use HTTPS for all posts and pages
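The second item above, displaying the accurate date and time using structured data, is usually handled with schema.org NewsArticle markup carrying ISO 8601 timestamps. A minimal sketch with invented example values:

```python
import json
from datetime import datetime, timezone

def news_article_schema(headline, published, modified):
    """Minimal schema.org NewsArticle markup with ISO 8601 timestamps.
    Field names follow schema.org; the example values are made up."""
    return {
        "@context": "https://schema.org",
        "@type": "NewsArticle",
        "headline": headline,
        "datePublished": published.isoformat(),
        "dateModified": modified.isoformat(),
    }

published = datetime(2019, 1, 16, 9, 30, tzinfo=timezone.utc)
markup = news_article_schema("Google Rolls Out Algorithm Update",
                             published, published)
# Embed the serialized dict in a <script type="application/ld+json"> tag.
print(json.dumps(markup, indent=2))
```

Keeping datePublished and dateModified accurate and machine-readable is what lets Google show trustworthy timestamps next to a story.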

The advice and tips provided in the document can help news websites affected by the latest Google algorithm update to recover.

Also, Google has put emphasis on the transparency of news publishers, which ties in with the search giant's E-A-T guidelines.

The new Google News guidelines ask publishers to be transparent by letting the readers know who published the content.

The advice Google gives is to include a clear byline, a short description of the author, and the contact details of the publication.

According to Google, providing these details to the readers and to the Google bot can help in filtering out the “sites or accounts that impersonate any person or organization, or that misrepresent or conceal their ownership or primary purpose.”

Google also warns news publishers not to engage in link schemes that are intended to manipulate PageRank of other websites.

Google Algorithm Update – January 16, 2019

It looks like Google has rolled out its first significant algorithm update for 2019. This time, the target seems to be news sites and blogs!

The data from SEMrush Sensor suggests that the “Google Newsgate Algorithm Update” has touched a high-temperature zone of 9.4 on Wednesday.

The speculation about an update was in the air for the last few days. Now, it looks like the search engine giant has come out with an incremental update on Wednesday night.

The websites hit worst include ABC's WBBJTV, FOX's KTVU, and CBS17.

[Screenshot: SEMrush tracker, Jan 9]

Google announced earlier that it rolls out around 500–600 core algorithm updates every year. In addition to this, there are broad core algorithm updates that Google rolls out three to four times a year.

These updates come with a major rank shift in the SERP with a few websites seeing a spike in organic rankings while others experience a dip.

Google has not yet confirmed if the algorithm update rolled out during the second week of January is a core or a broad core update. However, the update seems to have made drastic changes to the results shown in the featured snippets.

In addition to the news websites, the “Google Newsgate Algorithm Update 2019” has also affected blogs in niches such as sports, education, travel, government, and automotive sites.

According to the Google Algorithm Weather Report by MozCast, the climate was rough during the 9th and 10th, suggesting an algorithm update.

[Screenshot: MozCast algorithm tracker]

The graph shows significant fluctuations in the weather, especially during January 5th and 6th. After a few normal days, the weather deteriorated further, which may be a signal of two separate Google algorithm updates within the same week.

SEO communities are rife with discussions about the update as many websites were affected by the algorithm update in the last few days.

“Travel – All whitehat, good links, fresh content, aged domain, and all the good stuff. Was some dancing around Dec and then, wham, 3rd page,” said zippyants a Black Hat Forum member on Thursday.

“Big changes happening in the serps since Friday for us. Anyone noticing an uptick or downward slide of long-tail referrals? First time we’ve seen much since the big changes in August/September,” asked a user SnowMan68 via Webmaster World.

“Yes! Today, the signals are quite intense. Probably going on for past 4 days No changes seen on the sites though,” answered a Webmaster World user arunpalsingh to one of the questions asked in the forum.

In addition to this, the Google Grump Tool from AccuRanker has also suggested a “furious” last two days. This may be an indication that the algorithm update was rolled out in phases.

[Screenshot: AccuRanker algorithm tracker]

According to our early analysis, the sites affected by Google's first algorithm update of 2019 are the ones that publish questionable news.

Also, we saw a nosedive in the traffic of news sites that rewrote content without adding any new value.

Algoroo, another Google algorithm tracker, has added TechCrunch and CNBC to its top-losers list. This, yet again, supports our understanding that the update is aimed at news websites and blogs across industries.

[Screenshot: algorithm update 2019 stats]

Last year, Google rolled out the infamous Medic Update targeting wellness and YMYL websites. The impact was huge, and many of the affected websites are yet to come to terms with the traffic loss.

We found that the sites impacted by the Medic Update were lacking E-A-T (Expertise, Authoritativeness, Trustworthiness) quality signals. A few days after this, Google confirmed the same, saying the update had nothing to do with user experience.

Some websites hit by the Medic Update made remarkable comebacks after the algorithm update in November. The sites that recovered from the Medic Update created quality content based on the Google EAT guidelines.

The rollout of the update was completed on Sunday as all the sensors had cooled down by Monday. We will soon do a detailed analysis of the sites that were affected by the “Google Newsgate Algorithm.”

This will help you understand why the sites were affected and how they can recover from the latest Google Algorithm update.

Maccabees Update

Maccabees is the update that wrangled people between December 12th and 14th, 2017. There were considerable discussions on Twitter, Facebook, and SEO forums regarding losses of up to 30 percent of traffic on some websites.

If you happened to be an owner of such a website, then you may have been a victim of ‘Google Maccabees Update.’

This update hit hundreds of websites, and the reason was that these websites had multiple pages filled with huge numbers of keyword permutations. The update was designed to catch long-tail keywords used in permutations, because search results prefer pages targeting long-tail keywords. The majority of the sites hit by Maccabees were e-commerce, affiliate, travel, and real estate sites.

For instance, a travel agency targets multiple keywords, and a few of the flagged ones are as follows:

  1. Low-cost holiday package to Switzerland.
  2. Cheap Switzerland holiday package.
  3. Low-cost tickets to Switzerland.

Similarly, an affiliate website has multiple pages containing the following:

  1. Avoid mosquitoes at home.
  2. Get rid of mosquitoes.
  3. Wipe out mosquitoes.
  4. Wipe out mosquitoes fast.

You may guess why these sites were aiming for long-tail keywords and why SEOs kept such a keen eye on them: though the keywords seem similar, each one is a huge traffic driver.
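The pattern Maccabees punished can be pictured as a cross-product of modifiers, page templates, and topics, with one thin page generated per combination. A small sketch using hypothetical lists:

```python
from itertools import product

# Hypothetical modifiers, templates, and destinations mimicking the
# doorway-page pattern Maccabees targeted: one thin page per permutation.
modifiers = ["cheap", "low-cost", "discount"]
templates = ["{} holiday package to {}", "{} tickets to {}"]
destinations = ["Switzerland", "Norway"]

pages = [template.format(modifier.capitalize(), destination)
         for modifier, template, destination
         in product(modifiers, templates, destinations)]

print(len(pages))  # 3 modifiers x 2 templates x 2 destinations = 12 pages
```

A dozen combinations is harmless; sites hit by Maccabees scaled this to thousands of near-identical pages, which is what the update flagged.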

There is no formal name for this update. However, in a nod to Hanukkah, Barry Schwartz of Search Engine Roundtable informally called it ‘Maccabees,’ and the name stuck with the search community.

A Google spokesperson stated that the changes in the algorithm are meant to make the search results more relevant. The relevance may come from on-page or off-page content, or sometimes, both.

8 Major Google Algorithm Updates before 2018

Whenever there’s a new update from Google, websites are either affected positively or negatively. Discussed below are the major algorithm updates by Google before 2018. 

  1. Panda
  2. Penguin
  3. Hummingbird
  4. Pigeon
  5. Mobile
  6. RankBrain
  7. Possum
  8. Fred

What is Google Panda Update?

The Google Panda update, released in February 2011, aimed to lower the rankings of websites with thin or poor-quality content and bring sites with high-quality content to the top of the SERPs. The Panda search filter keeps getting updated from time to time, and sites escape the penalty once they make appropriate changes to the website.

Here's a brief look at its targets and how you can adapt to the Panda update:

Targets of Panda

  • Plagiarized or thin content
  • Duplicate content
  • Keyword stuffing
  • User-generated spam

Panda’s workflow

Panda allocates quality scores to pages based on their content quality and ranks them in the SERPs accordingly. Panda updates are frequent; hence, so are the penalties and the recoveries.

How to adapt

Keep regular track of your web pages, checking for plagiarized or thin content and keyword stuffing. You can do that using quality-checking tools like Siteliner and Copyscape.
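As a rough illustration of what such duplicate checks do (Siteliner and Copyscape are far more sophisticated), Python's standard difflib can flag near-duplicate text:

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Rough word-level similarity between two texts (0.0 to 1.0).
    Only a toy illustration of near-duplicate detection."""
    return SequenceMatcher(None, a.lower().split(), b.lower().split()).ratio()

original = "An algorithm is a series of instructions followed to solve a problem"
rewrite  = "An algorithm is a series of instructions followed to fix a problem"

score = similarity(original, rewrite)
if score > 0.8:
    print(f"possible duplicate (similarity {score:.2f})")
```

Lightly paraphrased text still scores very high here, which mirrors why simple rewording does not make scraped content "unique" in Google's eyes.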

 What is Google Penguin Update?

Google's Penguin update was released in April 2012. It aims to filter out websites that boost their rankings in the SERPs through spammy links, i.e., purchased low-quality links.

Targets of Penguin

  • Links with over-optimized text
  • Spammy links
  • Irrelevant links

Penguin’s workflow

It works by lowering the rankings of sites with manipulative link profiles: it checks the quality of backlinks and demotes sites with low-quality links.

How to adapt

Keep tracking your link profile's growth and run regular checks on your backlinks to audit their quality. Tools like SEO SpyGlass can analyze your site and help you adapt to the Penguin update.

What is Google Hummingbird Update?

It is considered the most significant algorithm update Google had released since Penguin. The update was introduced to emphasize natural-language queries, preferring context over individual keywords.

While keywords are still vital, Hummingbird helps a page rank well even if it doesn't contain the exact terms of the query.

Targets of Hummingbird

  • Low-quality content
  • Keyword stuffing

Hummingbird’s workflow

It helps Google fetch web pages matching the complete query asked by the user instead of searching for individual terms within it. However, it still relies on keywords when ranking a web page in the SERPs.

How to adapt

Expand your keyword research and focus your content on conceptual queries. Additionally, search for related queries, synonymous queries, and co-occurring words or terms. You can easily get these ideas from Google Autocomplete or Google Related Searches.
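To see why synonym and co-occurrence coverage matters, here is a toy sketch of context-aware matching. The synonym map is entirely hypothetical, and real systems such as Hummingbird are vastly more sophisticated:

```python
# Toy illustration: a page can match a query through related terms,
# not just exact ones. The synonym map below is hypothetical.
SYNONYMS = {
    "cheap": {"low-cost", "affordable", "budget"},
    "holiday": {"vacation", "trip"},
}

def expanded_terms(query: str) -> set:
    """Expand a query's words with their (hypothetical) synonyms."""
    terms = set(query.lower().split())
    for t in list(terms):
        terms |= SYNONYMS.get(t, set())
    return terms

def matches(query: str, page_text: str) -> int:
    """Count page words that overlap the expanded query vocabulary."""
    page = set(page_text.lower().split())
    return len(expanded_terms(query) & page)

# The page says "affordable vacation", yet still matches "cheap holiday".
print(matches("cheap holiday", "affordable vacation deals in norway"))  # 2
```

The takeaway for content: covering synonymous and related phrasing lets a page stay relevant for queries that never use its exact words.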

What is Google Pigeon Update? 

It is the Google update released on July 24, 2014, for the US and expanded to the United Kingdom, Canada, and Australia on December 22, 2014. It aims to enhance the rankings of local listings for a search query. The changes also affect Google Maps search results along with the regular Google search results.

Targets of Pigeon

  • Poor on-page optimization
  • Poor off-page optimization

Pigeon’s workflow

Pigeon works to rank local results based on the user’s location. The update developed some ties between the traditional core algorithm and the local algorithm.

How to adapt

Put real effort into both on-page and off-page SEO. It is best to start with on-page SEO; later, you can adopt the best off-page SEO techniques to rank at the top of Google's SERPs. One of the best off-page steps is getting your business into local listings.

Also read: How to Top-Rank Your Business on Google Maps Search Results

What is Google Mobile-Friendly Update?

The Google Mobile-Friendly (aka Mobilegeddon) algorithm update was launched on April 21st, 2015. It was designed to boost mobile-friendly pages in Google's mobile search results while filtering out pages that are not optimized for mobile viewing.

Targets of Mobile update

  • Poor mobile user interface
  • Lack of mobile-optimized web pages

Mobile update workflow

The mobile-friendly update aims to rank web pages that support mobile viewing at the top of the SERPs and to downgrade pages that are unresponsive or unusable on mobile devices.

How to adapt

Tweak your web design to provide better mobile usability and reduce loading time. Google's mobile-friendly test will help you identify the changes needed to cope with various versions of mobile software.

Also read: 6 Common Web Design Mistakes That Hurt Search Engine Optimization of Your Website

What is Google RankBrain Update?

As reported by Bloomberg and confirmed by Google, RankBrain is a machine-learning artificial-intelligence system launched to process search results more efficiently. It was launched on October 26th, 2015.

Target of RankBrain

  • Poor user-interface
  • Insubstantial content
  • Irrelevant features on the web page

RankBrain workflow

RankBrain is a machine learning system released to understand the meaning of the queries better and provide relevant content to the audience.

It is a part of Google’s Hummingbird algorithm. It ranks the web pages based on the query-specific features and relevancy of a website.

How to adapt

Conduct competitive analysis and optimize your pages for comprehensiveness and content relevance. You can use tools like SEMrush and SpyFu to analyze the concepts, terms, and subjects used by high-ranking competitor pages. This is a good way to outmatch your competitors.

What is Google Possum Update?

Possum was the Google algorithm update released on September 1st, 2016. It is considered the most significant algorithm update since Pigeon in 2014. The update focused on improving the rankings of businesses that fall outside physical city limits and on filtering business listings based on address and affiliations.

Targets of Possum

  • Tough Competition in your target location

Possum’s workflow

Search results are provided depending on the searcher's location: the closer you are to a given business, the more likely you are to see it at the top of the local search results. Interestingly, Possum also surfaced results for well-known companies located outside the physical city area.

How to adapt

Expand your keyword list and do location-specific rank tracking. Local businesses should focus on more keywords because of the volatility Possum brought into the SERPs.

What is Google Fred Update?

Fred is the Google algorithm update released on March 8, 2017.

Targets of Fred update

  • Affiliate-heavy content
  • Ad-centered content
  • Thin content

Fred’s workflow

This update targets web pages that violate Google's webmaster guidelines. The pages primarily affected are blogs with low-quality content created mainly to generate revenue from traffic.

How to adapt

Remove thin content after analyzing it against the Google Search Quality Guidelines. If you're allowing advertisements on your pages, make sure they appear on pages with high-quality, useful content. Don't try to manipulate Google into thinking a page offers high-quality content when it is instead full of affiliate links.

MCB Love to Mention : )

Content Courtesy →

Google Algorithm Update 2020: Unconfirmed Update
Have a view?
Pay a visit:

MCB-Rhythm Of Algorithm


[[Two women and a man are standing around, talking.]] Woman: Our lab is studying a fungus that takes over mammal brains and makes them want to study fungi. Man: It’s very promising! We’re opening a whole new wing of the lab just to cultivate it! {{Title text: Conspiracy theory: There’s no such thing as corn. Those fields you see are just the stalks of a fungus that’s controlling our brains to make us want to spread it.}}



The Algorithm: Idiom of Modern Science

by Bernard Chazelle

When the great Dane of 20th century physics, Niels Bohr, was not busy chewing on a juicy morsel of quantum mechanics, he was known to yap away witticisms worthy of Yogi Berra. The classic Bohrism “Prediction is difficult, especially about the future” alas came too late to save Lord Kelvin. Just as physics was set to debut in Einstein’s own production of Extreme Makeover, Kelvin judged the time ripe to pen the field’s obituary: “There is nothing new to be discovered in physics now.” Not his lordship’s finest hour.

Nor his worst. Aware that fallibility is the concession the genius makes to common mortals to keep them from despairing, Kelvin set early on to give the mortals much to be hopeful about. To wit, the thermodynamics pioneer devoted the first half of his life to studying hot air and the latter half to blowing it. Ever the perfectionist, he elevated to an art form the production of pure, unadulterated bunk: “X-rays will prove to be a hoax”; “Radio has no future”; “Heavier-than-air flying machines are impossible”; and my personal favorite, “In science there is only physics; all the rest is stamp collecting.” Kelvin’s crystal ball was the gift that kept on giving.

Soon, my friends, you will look at a child’s
homework — and see nothing to eat.

Gloat not at a genius’ misfortunes. Futurologitis is an equal-opportunity affliction, one hardly confined to the physicist’s ward. “I think there is a world market for maybe five computers,” averred IBM Chairman, Thomas Watson, a gem of prescience matched only by a 1939 New York Times editorial: “The problem with television is that people must sit and keep their eyes glued to the screen; the average American family hasn’t time for it.” The great demographer Thomas Malthus owes much of his fame to his loopy prediction that exponentially increasing populations would soon outrun the food supply. As the apprentice soothsayer learns in “Crystal Gazing 101,” never predict a geometric growth!

Apparently, Gordon Moore skipped that class. In 1965, the co-founder of semiconductor giant Intel announced his celebrated law: Computing power doubles every two years. Moore’s Law has, if anything, erred on the conservative side. Every eighteen months, an enigmatic pagan ritual will see white-robed sorcerers silently shuffle into a temple dedicated to the god of cleanliness, and soon reemerge with, on their faces, a triumphant smile and, in their hands, a silicon wafer twice as densely packed as the day before. No commensurate growth in human mental powers has been observed: this has left us scratching our nonexpanding heads, wondering what it is we’ve done to deserve such luck.

To get a feel for the magic, consider that the latest Sony PlayStation would easily outpace the fastest supercomputer from the early nineties. If not for Moore’s Law, the Information Superhighway would be a back alley to Snoozeville; the coolest thing about the computer would still be the blinking lights. And so, next time you ask who engineered the digital revolution, expect many hands to rise. But watch the long arm of Moore’s Law tower above all others. Whatever your brand of high-tech addiction, be it IM, iPod, YouTube, or Xbox, be aware that you owe it first and foremost to the engineering wizardry that has sustained Moore’s predictive prowess over the past forty years.

Enjoy it while it lasts, because it won’t. Within a few decades, say the optimists, a repeal is all but certain. Taking their cue from Bill Gates, the naysayers conjure up the curse of power dissipation, among other woes, to declare Moore’s Law in the early stage of rigor mortis. Facing the bleak, sorrowful tomorrows of The Incredible Shrinking Chip That Won’t Shrink No More, what’s a computer scientist to do?

The rule of law

Break out the Dom and pop the corks, of course! Moore’s Law has added fizz and sparkle to the computing cocktail, but for too long its exhilarating potency has distracted the party-goers from their Holy Grail quest: How to unleash the full computing and modeling power of the Algorithm. Not to stretch the metaphor past its snapping point, the temptation is there for the Algorithmistas (my tribe) to fancy themselves as the Knights of the Round Table and look down on Moore’s Law as the Killer Rabbit, viciously elbowing King Arthur’s intrepid algorithmic warriors. Just as an abundance of cheap oil has delayed the emergence of smart energy alternatives, Moore’s Law has kept algorithms off center stage. Paradoxically, it has also been their enabler: the killer bunny turned sacrificial rabbit who sets the track champion on a world record pace, only to fade into oblivion once the trophy has been handed out. With the fading imminent, it is not too soon to ask why the Algorithm is destined to achieve celebrity status within the larger world of science. While you ask, let me boldly plant the flag and bellow the battle cry:

The Algorithm’s coming-of-age as the new language of science promises to be the most disruptive scientific development since quantum mechanics.
If you think such a blinding flare of hyperbole surely blazed right out of Lord Kelvin’s crystal ball, read on and think again. A computer is a storyteller and algorithms are its tales. We’ll get to the tales in a minute but, first, a few words about the storytelling.

Computing is the meeting point of three powerful concepts: universality, duality, and self-reference. In the modern era, this triumvirate has bowed to the class-conscious influence of the tractability creed. The creed’s incessant call to complexity class warfare has, in turn, led to the emergence of that ultimate class leveler: the Algorithm. Today, not only is this new “order” empowering the e-technology that stealthily rules our lives; it is also challenging what we mean by knowing, believing, trusting, persuading, and learning. No less. Some say the Algorithm is poised to become the new New Math, the idiom of modern science. I say The Sciences They Are A-Changin’ and the Algorithm is Here to Stay.

Reread the previous paragraph. If it still looks like a glorious goulash of blathering nonsense, good! I shall now explain, so buckle up!

The universal computer

Had Thomas Jefferson been a computer scientist, school children across the land would rise in the morning and chant these hallowed words:

We hold these truths to be self-evident, that all computers are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Universality and the separation of Data, Control, and Command.
Computers come in different shapes, sizes, and colors, but all are created equal—indeed, much like 18th century white male American colonists. Whatever the world’s fastest supercomputer can do (in 2006, that would be the IBM Blue Gene/L), your lowly iPod can do it, too, albeit a little more slowly. Where it counts, size doesn’t matter: all computers are qualitatively the same. Even exotic beasts such as quantum computers, vector machines, DNA computers, and cellular automata can all be viewed as fancy iPods. That’s universality!
The field of computing later
opened up to men

Here’s how it works. Your iPod is a tripod (where did you think they got that name?), with three legs called control, program, and data. Together, the program and the data form the two sections of a document [program | data] that, to the untrained eye, resembles a giant, amorphous string of 0s and 1s. Something like this:

[ 1110001010110010010 | 1010111010101001110 ]

Each section has its own, distinct purpose: the program specifies instructions for the control to follow (e.g., how to convert text into pdf); the data encodes plain information, like this essay (no, not plain in that sense). The data string is to be read, not to be read into. About it, Freud would have quipped: “Sometimes a string is just a string.” But he would have heard, seeping from the chambers of a program, the distant echoes of a dream: jumbled signs crying out for interpretation. To paraphrase the Talmudic saying, an uninterpreted program is like an unread letter. The beauty of the scheme is that the control need not know a thing about music. In fact, simply by downloading the appropriate program-data document, you can turn your iPod into: an earthquake simulator; a word processor; a web browser; or, if downloading is too much, a paperweight. Your dainty little MP3 player is a universal computer.
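To make the tripod concrete, here is a toy control in Python (every name below is invented for illustration, not a real machine's instruction set): a dispatcher that blindly obeys whatever the program section says, with no idea what any of it "means."

```python
# A toy [program | data] interpreter. The control knows only how to look up
# an instruction and obey it; all the intelligence lives in the program.
def control(document):
    program, data = document          # the two sections of [program | data]
    output = []
    for instruction in program:       # follow orders blindly
        if instruction == "PRINT":
            output.append(data)       # treat the data as inert text
        elif instruction == "REVERSE":
            data = data[::-1]         # a different order, a different act
    return "".join(output)

# The same control becomes a different "machine" with each program:
print(control((["PRINT"], "hello")))             # an echo machine
print(control((["REVERSE", "PRINT"], "hello")))  # a string reverser
```

Swap in a different program and the very same control turns into a different machine; that is the whole trick.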

The control is the computer’s brain and the sole link between program and data. Its only function in life is to read the data, interpret the program’s orders, and act on them—a task so pedestrian that modern theory of reincarnation ranks the control as the lowest life form on the planet, right behind the inventor of the CD plastic wrap. If you smash your iPod open with a mallet and peek into its control, you’ll discover what a marvel of electronics it is—okay, was. Even more marvelous is the fact that it need not be so. It takes little brainpower to follow orders blindly (in fact, too much tends to get in the way). Stretching this principle to the limit, one can design a universal computer with a control mechanism so simple that any old cuckoo clock will outsmart it. This prompts the obvious question: did Orson Welles know that when he dissed the Swiss and their cuckoo clocks in “The Third Man”? It also raises a suspicion: doesn’t the control need to add, multiply, divide, and do the sort of fancy footwork that would sorely test the nimblest of cuckoo clocks?

Alas, it still strikes the hours

No. The control on your laptop might indeed do all of those things, but the point is that it need not do so. (Just as a bank might give you a toaster when you open a new account, but it need not be a toaster; it could be a pet hamster.) Want to add? Write a program to add. Want to divide? Write a program to divide. Want to print? Write a program to print. A control that delegates all it can to the program’s authority will get away with a mere two dozen different “states”—simplicity a cuckoo clock could only envy. If you want your computer to do something for you, don’t just scream at the control: write down instructions in the program section. Want to catch trout? Fine, append a fishing manual to the program string. The great nutritionist Confucius said it better: “Give a man a fish and you feed him for a day. Teach a man to fish and you feed him for a lifetime.” The binary view of fishing = river + fisherman makes way for a universal one: fishing = river + fishing manual + you. Similarly,

computing = data + program + control.

This tripodal equation launched a scientific revolution, and it is to British mathematician Alan Turing that fell the honor of designing the launching pad. His genius was to let robots break out of the traditional binary brain-brawn mold, which conflates control and program, and embrace the liberating “tripod-iPod” view of computing. Adding a third leg to the robotic biped ushered in the era of universality: any computer could now simulate any other one.
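The cuckoo-clock control can be sketched in a few lines. The machine below is a hypothetical toy in the Turing-machine mold, not Turing's own formulation: the control's entire repertoire is "look up the rule, obey it," while all the smarts sit in the program, which is nothing but a table of rules.

```python
# A minimal Turing-style control: state + symbol -> (write, move, new state).
def run(program, tape, state="start", head=0, fuel=1000):
    tape = dict(enumerate(tape))        # blank cells read as "_"
    while state != "halt" and fuel > 0:
        symbol = tape.get(head, "_")
        write, move, state = program[(state, symbol)]  # the only "thinking"
        tape[head] = write
        head += 1 if move == "R" else -1
        fuel -= 1                       # a safety budget for this toy
    return "".join(tape[i] for i in sorted(tape))

# A program that flips bits; the control has no idea what "flipping" means.
flipper = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
print(run(flipper, "1101"))   # -> 0010_
```

A different table, a different machine; the control itself never changes.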

Underpinning all of that, of course, was the digital representation of information: DVD vs VCR tape; piano vs violin; Anna Karenina vs Mona Lisa. The analog world of celluloid film and vinyl music is unfit for reproduction: it doesn’t die; it just fades away. Quite the opposite, encoding information over an alphabet opens the door to unlimited, decay-free replication. In a universe of 0s and 1s, we catch a glimpse of immortality; we behold the gilded gates of eternity flung wide open by the bewitching magic of a lonely pair of incandescent symbols. In short, analog sucks, digital rocks.

Two sides of the same coin

Load your iPod with the program-data document [Print this | Print this]. Ready? Press the start button and watch the words “Print this” flash across the screen. Exciting, no? While you compose yourself with bated breath amid the gasps and the shrieks, take stock of what happened. To the unschooled novice, data and program may be identical strings, but to the cuckoo-like control they couldn’t be more different: the data is no more than what it is; the program is no less than what it means. The control may choose to look at the string “Print this” either as a meaningless sequence of letters or as an order to commit ink to paper. To scan symbols mulishly or to deforest the land: that is the option at hand here—we call it duality.
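Python happens to make duality a two-line party trick. In the sketch below (my example, not the essay's iPod), the very same string is inert data to one operation and an order to act to another:

```python
# One string, two readings.
s = "print('Print this')"

as_data = len(s)   # scanned mulishly: just 19 characters, nothing more
exec(s)            # read into: an order to commit ink to paper
```

To `len`, the string is a signifier and nothing else; to `exec`, it is all signified.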

So 1907, I almost hear you sigh. In that fateful year, Ferdinand de Saussure, the father of linguistics, announced to a throng of admirers that there are two sides to a linguistic sign: its signifier (representation) and its signified (interpretation). A string is a sign that, under the watchful eye of the control, acts as signifier when data and as signified when a program.

Saussure’s intellectual progeny is a breed of scholars known as semioticians. Funny that linguists, of all people, would choose for themselves a name that rhymes with mortician. Funny or not, semiotics mavens will point out the imperfect symmetry between program and data. The latter is inviolate. Signifiers must be treated with the utmost reverence: they could be passwords, hip-hop rhymes, or newfound biblical commandments. Mess with them at your own peril.

Programs are different. The encoding of the signified is wholly conventional. Take the program “Print this”, for example. A francophonic control would have no problem with “Imprimer ceci” or, for that matter, with the obsequious “O, control highly esteemed, may you, noblest of cuckoos, indulge my impudent wish to see this humble string printed out, before my cup runneth over and your battery runneth out.” The plethora of programming languages reveals how many ways there are of signifying the same thing. (Just as the plethora of political speeches reveals how many ways there are of signifying nothing.)

Sensing the comic, artistic, and scholarly potential of the duality between program and data, great minds went to work. Abbott and Costello’s “Who’s on First?” routine is built around the confusion between a baseball player’s nickname (the signifier) and the pronoun “who” (the signified). Magritte’s celebrated painting “Ceci n’est pas une pipe” (this is not a pipe) plays on the distinction between the picture of a pipe (the signifier) and a pipe one smokes (the signified). The great painter might as well have scribbled on a blank canvas: “Le signifiant n’est pas le signifié ” (the signifier is not the signified). But he didn’t, and for that we’re all grateful.

English scholars are not spared the slings and arrows of duality either. How much more dual can it get than the question that keeps Elizabethan lit gurus awake at night: “Did Shakespeare write Shakespeare?” And pity the dually tormented soul that would dream up such wacky folderol: “’Twas brillig, and the slithy toves Did gyre and gimble in the wabe; All mimsy were the borogoves, And the mome raths outgrabe.”

Say it ain’t true

I am lying. Really? Then I am lying when I say I am lying; therefore, I am not lying. Yikes. But if I am not lying then I am not lying when I say I am lying; therefore, I am lying. Double yikes. Not enough yet? Okay, consider the immortal quip of the great American philosopher Homer Simpson: “Oh Marge, cartoons don’t have any deep meaning; they’re just stupid drawings that give you a cheap laugh.” If cartoons don’t have meaning, then Homer’s statement is meaningless (not merely a philosopher, the man is a cartoon character); therefore, for all we know, cartoons have meaning. But then Homer’s point is… Doh! Just say it ain’t true. Ain’t true? No, please, don’t say it ain’t true! Because if it ain’t true then ain’t true ain’t true, and so…


Beware of self-referencing, that is to say, of sentences that make statements about themselves. Two of the finest mathematical minds in history, Cantor and Gödel, failed to heed that advice and both went stark raving bonkers. As the Viennese gentleman with the shawl-draped couch already knew, self-reference is the quickest route to irreversible dementia.

Escher’s reproductive parts

It is also the salt of the computing earth. Load up your iPod, this time with the program-data document [Print this twice | Print this twice]. Push the start button and see the screen light up with the words: “Print this twice Print this twice”. Lo and behold, the thing prints itself! Well, not quite: the vertical bar is missing. To get everything right and put your budding career as a computer virus artist on the fast track, try this instead: [Print this twice, starting with a vertical bar the second time | Print this twice, starting with a vertical bar the second time]. See how much better it works now! The key word in the self-printing business is “twice”: “never” would never work; “once” would be once too few; “thrice”?? Please watch your language.

Self-reproduction requires a tightly choreographed dance between: (i) a program explaining how to copy the data; (ii) a data string describing that very same program. By duality, the same sequence of words (or bits) is interpreted in two different ways; by self-reference, the duality coin looks the same on both sides. Self-reference—called recursion in computer parlance—requires duality; not the other way around. Which is why the universal computer owes its existence to duality and its power to recursion. If Moore’s Law is the fuel of Google, recursion is its engine.
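For the skeptics: self-reproduction is not just iPod talk. The two lines of Python below form a classic quine; run on their own, they print an exact copy of themselves. The string s plays data the first time around (the thing to be copied) and program text the second (the thing doing the copying):

```python
# A genuine self-printing program (a classic Python quine).
s = 's = %r\nprint(s %% s)'
print(s % s)
```

The %r plays the role of "starting with a vertical bar the second time": it re-quotes the string so that the copy comes out syntactically intact.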

The tripodal view of computing was the major insight of Alan Turing—well, besides this little codebreaking thing he did in Bletchley Park that helped win World War II. Not to discount the lush choral voices of Princeton virtuosos Alonzo Church, Kurt Gödel, and John von Neumann, it is Maestro Turing who turned into a perfect opus the hitherto disjointed scores of the computing genre.

Mother Nature, of course, scooped them all by a few billion years. Your genome consists of two parallel strands of DNA that encode all of your genetic inheritance. Your morning addiction to Cocoa Puffs, your night cravings for Twinkies? Yep, it’s all in there. Now if you take the two strands apart and line them up, you’ll get two strings about three billion letters long. Check it out:

ACGGTATCCGAATGC…
TGCCATAGGCTTACG…
There they are: two twin siblings locking horns in a futile attempt to look different. Futile because if you flip the As into Ts and the Cs into Gs (and vice versa) you’ll see each strand morph into the other one. The two strings are the same in disguise. So flip one of them to get a more symmetric layout. Like this:

[ ACGGTATCCGAATGC… | ACGGTATCCGAATGC… ]
Was I the only one to spot a suspicious similarity with [Print this twice | Print this twice] or did you, too? Both are program-data documents that provide perfectly yummy recipes for self-reproduction. Life’s but a walking shadow, said Macbeth. Wrong. Life’s but a self-printing iPod! Ministry-of-Virtue officials will bang on preachily about there being more to human life than the blind pursuit of self-replication, a silly notion that Hollywood’s typical fare swats away daily at a theater near you. Existential angst aside, the string “ACGGTATCCGAATGC…” is either plain data (the genes constituting your DNA) or a program whose execution produces, among other things, all the proteins needed for DNA replication, plus all of the others needed for the far more demanding task of sustaining your Cocoa Puffs addiction. Duality gives you the choice of thinking of your genome either as a long polymer of nucleotides (the data to be read) or as the sequence of amino acids forming its associated proteins (the “programs of life”). Hence the fundamental equation of biology:
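The base-flipping is easy to check for yourself. A minimal sketch (function names mine; real strands also run in opposite directions, a detail elided here as in the text):

```python
# Flip A<->T and C<->G: each strand morphs into its partner.
FLIP = str.maketrans("ACGT", "TGCA")

def partner(strand: str) -> str:
    return strand.translate(FLIP)

strand = "ACGGTATCCGAATGC"
print(partner(strand))                     # TGCCATAGGCTTACG
print(partner(partner(strand)) == strand)  # True: the same string in disguise
```

Flipping twice gives back the original, which is exactly what lets one strand serve as the template for rebuilding the other.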

Life = Duality + Self-reference
Elementary, my dear Watson!

On April 25, 1953, the British journal Nature published a short article whose understated punchline was the shot heard ’round the world: “It has not escaped our notice that the specific pairing we have postulated immediately suggests a possible copying mechanism for the genetic material.” In unveiling to the world the molecular structure of DNA, James Watson and Francis Crick broke the Code of Life. In so doing, they laid bare the primordial link between life and computing. One can easily imagine the reaction of that other codebreaker from Bletchley Park: “Duality and self-reference embedded in molecules? Jolly good to know God thinks like me.”

Turing’s swagger would have been forgivable. After all, here was the man who had invented the computer. Here was the man who had put the mind-matter question on a scientific footing. Here was the man who had saved Britain from defeat in 1941 by breaking the Nazi code. Alas, good deeds rarely go unpunished. In lieu of a knighthood, a grateful nation awarded Alan Turing a one-way ticket to Palookaville, England: a court conviction for homosexuality with a sentence of forced estrogen injections. On June 7, 1954, barely one year to the day of Watson and Crick’s triumph, Alan Turing went home, ate an apple laced with cyanide, and died. His mother believed, as a mother would, that it was an accident.

The modern era

The post-Turing years saw the emergence of a new computing paradigm: tractability. Its origin lay in the intuitive notion that checking a proof of Archimedes’s theorem can’t be nearly as hard as finding it in the first place; enjoying a Coke must be simpler than discovering its secret recipe (or so the Coca-Cola Company hopes); falling under the spell of ‘Round Midnight ought to be easier than matching Monk’s composing prowess. But is it really? Amazingly, no one knows. Welcome to the foremost open question in all of computer science!

Ever wondered whether the 1,000-song library stored in your iPod could be reordered and split up to form two equal-time playlists? Probably not. But suppose you wanted to transfer your songs to the two sides of an extra-length cassette while indulging your lifelong passion for saving money on magnetic tape. Which songs would you put on which side so as to use as little tape as possible? Now you’d be wondering, wouldn’t you? (Humor me: say yes.)

First you prove it,
then you let it sink in.

You wouldn’t wonder long, anyway. After a minute’s reflection, you’d realize you didn’t have the faintest idea how to do that. (Warning: splitting a tune in the middle is a no-no.) Of course, you could try all possibilities but that’s a big number—roughly 1 followed by 300 zeroes. Ah, but your amazing friend Alice, she knows! Or so she says. Then why not just get the two playlists from her? By adding up a few numbers, you’ll easily verify that she’s not lying and that, indeed, both lists have the same playing time. What Alice will hand over to you is, in essence, a proof that your song library can be split evenly. Your job will be reduced to that of proof-checking, a task at which a compulsive tape-saving Scrooge might even shine. Heads-up: did you notice my nonchalant use of the word “lying”? When a movie’s opening scene casually trains the camera on a gun, no one might get hurt for a while, but you know that won’t last.

Alas, wondrous Alice fell down the rabbit hole eons ago and, these days, a good library splitting friend is hard to find. And so, sadly, you’ll have little choice but to compile the two lists yourself and engage in that dreaded thing called proof-finding. That’s a tougher nut to crack. So much so that even if you were to harness the full power of an IBM Blue Gene/L running the best software available anywhere on earth and beyond, the entire lifetime of the universe wouldn’t be enough! You might get lucky with the parameters and get it done sooner, but getting lucky? Yeah, right…
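Library splitting is known to complexity theorists as the PARTITION problem, and the gap between checking and finding is easy to stage with toy numbers (the eight song lengths below are made up for the occasion):

```python
from itertools import combinations

songs = [210, 185, 240, 195, 160, 230, 175, 205]  # durations in seconds

def check_proof(side_a, side_b):
    """Proof-checking: a few additions, fit for a tape-saving Scrooge.
    The two sides must use every song once and sum to the same total."""
    return (sorted(side_a + side_b) == sorted(songs)
            and sum(side_a) == sum(side_b))

def find_split():
    """Proof-finding, by brute force: try subsets until one sums to
    half the total. Fine for 8 songs; hopeless for 1,000."""
    half, rem = divmod(sum(songs), 2)
    if rem:
        return None                     # an odd total can never split evenly
    for r in range(1, len(songs)):
        for side_a in combinations(songs, r):
            if sum(side_a) == half:
                rest = list(songs)
                for song in side_a:
                    rest.remove(song)
                return list(side_a), rest
    return None

side_a, side_b = find_split()
print(side_a, side_b, check_proof(side_a, side_b))
```

With 1,000 songs the brute-force loop faces roughly 2^1000 subsets, which is where that "1 followed by 300 zeroes" and the lifetime of the universe come in.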

To add insult to injury, computer scientists have catalogued thousands of such Jurassic problems—so named for the dinosaur-like quality of their solutions: hard to discover but impossible to miss when they pop up in front of you; in other words, proofs hopelessly difficult to find but a breeze to verify. Courtesy of Murphy’s Law, of course, the great Jurassics of the world include all of the hydra-headed monsters we’re so desperate to slay: drug design; protein folding; resource allocation; portfolio optimization; suitcase packing; etc. Furthermore, even shooting for good approximate solutions—when the notion makes sense—can sometimes be just as daunting.

Now a funny thing happened on the way back from the word factory. Despite its dazzling lyricism, metaphorical felicity, hip-hoppish élan, not to mention a Niagara of adulatory gushing I’ll kindly spare you, my staggeringly brilliant coinage “Jurassic” hasn’t caught on. Yet. Skittish computer scientists tend to favor the achingly dull “NP-complete.” Worse, their idea of bustin’ a dope, def funky rhyme is to—get this—write down the thing in full, as in “complete for nondeterministic polynomial time.” To each their own.

Back to the Jurassics. Always basking in the spotlight, they are famously difficult, impossibly hard to satisfy, and—if their resilience is any guide—quite pleased with the attention. These traits often run in the family; sure enough, the Jurassics are blood kin. The first to put them on the analyst’s couch and pin their intractable behavior on consanguinity were Stephen Cook, Jack Edmonds, Richard Karp, and Leonid Levin. In the process they redefined computing around the notion of tractability and produced the most influential milestone in post-Turing computer science.

But what is a tractable problem, you ask? Answer: one that can be solved in polynomial time. Oh, swell; nothing like calling upon the opaque to come to the rescue of the obscure! Relax: it’s quite simple, really. If you double the size of the problem—say, your iPod library will now hold 2,000 tunes instead of a mere 1,000—then the time to find an even split should at most double, or quadruple, or increase by some fixed rate (i.e., independent of the problem size). That’s what it means to be tractable. Convoluted as this definition may seem, it has two things going for it: one is to match our intuition of what can be solved in practice (assuming the fixed rate isn’t “fixed” too high); the other is to leave the particular computer we’re working on out of the picture. See how there is no mention of computing speeds; only of growth rates. It is a statement about software, not hardware. Tractability is a universal attribute of a problem—or lack thereof. Note: some scholars prefer the word feasibility. Obviously, to resist the lure of the opening riff of Wittgenstein’s “Tractatus Logico-Philosophicus” takes willpower; predictably, the feasibility crowd is thin.

What do you mean, ‘intractable’ ?

Library splitting does not appear to be tractable. (Hold the tears: you’ll need them in a minute.) Any algorithm humans have ever tried—and many have—requires exponential time. Read: all of them share the dubious distinction that their running times get squared (not merely scaled up by a constant factor) whenever one doubles the size of the problem. If you do the math, you’ll see it’s the sort of growth that quickly gets out of hand.

Well, do the math. Say you want to solve a problem that involves 100 numbers and the best method in existence takes one second on your laptop. How long would it take to solve the same problem with 200 numbers, instead? Answer: just a few seconds if it’s tractable; and C × 2^200 = (C × 2^100) × 2^100 = 2^100 seconds if it’s not. That’s more than a billion trillion years! To paraphrase Senator Dirksen from the great State of Illinois, a trillion years here, a trillion years there, and pretty soon you’re talking real time. Exponentialitis is not a pretty condition. Sadly, it afflicts the entire Jurassic menagerie.
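If the back-of-envelope feels glib, let the machine do it (assuming, as above, that the running time grows as C × 2^n and that n = 100 takes one second):

```python
# Calibrate the constant so that a 100-number instance takes one second.
C = 1 / 2**100
seconds = C * 2**200                   # the 200-number instance: 2^100 seconds
years = seconds / (365.25 * 24 * 3600)
print(f"{years:.2e} years")            # on the order of 10^22 years
```

A billion trillion is a mere 10^21, so the claim in the text is, if anything, on the modest side.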

The true nature of the ailment eludes us but this much we know: it’s genetic. If any one of the Jurassics is tractable, wonder of wonders, all of them are. Better still: a cure for any one of them could easily be used to heal any of the others. Viewed through the tractability lens, the Jurassics are the same T. rex in different brontosaurus clothing. Heady stuff! The day Alice can split your song library within a few hours will be the day biologists can fold proteins over breakfast, design new drugs by lunch, and eradicate deadly diseases just in time for dinner. The attendant medical revolution will likely make you live the long, jolly life of a giant Galápagos tortoise (life span: 150 years). Alice’s discovery would imply the tractability of all the Jurassics (P=NP in computer lingo). Should the computing gods smile upon us, the practical consequences could be huge.

Granted, there would be a few losers: mostly online shoppers and mathematicians. All commercial transactions on the Internet would cease to be secure and e-business would grind to a halt. (More on this gripping drama in the next section.) The math world would take a hit, too: P=NP would prove Andrew Wiles, the conqueror of Fermat’s Last Theorem, no more deserving of credit than his referee. Well, not quite. Mathematicians like to assign two purposes to a proof: one is to convince them that something is true; the other is to help them understand why something is true. Tractability bears no relevance to the latter. Still, no one wants to see friendly mathematicians swell the ranks of the unemployed as they get replaced by nano iPods, so the consensus has emerged that P is not NP. There are other reasons, too, but that one is the best because it puts computer scientists in a good light. The truth is, no one has a clue.

To be P or not to be P, that is NP’s question. A million-dollar question, in fact. That’s how much prize money the Clay Mathematics Institute will award Alice if she resolves the tractability of library splitting. (She will also be shipped to Guantánamo by the CIA, but that’s a different essay.) Which side of the NP question should we root for? We know the stakes: a short existence blessed with online shopping (P≠NP); or the interminable, eBay-less life of a giant tortoise (P=NP). Tough call.

P=NP   (Or why you won’t find the proof on eBay)

An algorithm proving P=NP might conceivably do for technology what the discovery of the wheel did for land transportation. Granted, to discover the wheel is always nice, but to roll logs in the mud has its charms, too. Likewise, the intractability of proof-finding would have its benefits. That 1951 vintage Wham-O hula hoop you bought on eBay the other day, er, you didn’t think the auction was secure just because online thieves were too hip for hula hoops, did you? What kept them at bay was the (much hoped-for) intractability of integer factorization.

Say what? Prime numbers deterring crooks? Indeed. Take two primes, S and T, each one, say, a thousand digits long. The product R = S × T is about 2,000 digits long. Given S and T, your laptop will churn out R in a flash. On the other hand, if you knew only R, how hard would it be for you to retrieve S and T? Hard. Very hard. Very very hard. Repeat this until you believe it because the same algorithm that would find S and T could be used to steal your credit card off the Internet!
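To feel the asymmetry on a laptop, shrink the primes to six digits (real RSA moduli run to hundreds of digits, where trial division is beyond hopeless):

```python
def factor(r: int):
    """Naive trial division: the work grows with the smallest prime
    factor, which is exactly what makes thousand-digit primes so cozy."""
    d = 2
    while d * d <= r:
        if r % d == 0:
            return d, r // d
        d += 1
    return r, 1   # r itself is prime

S, T = 104723, 104729      # two primes
R = S * T                  # the easy direction: one multiplication
print(factor(R))           # the hard direction: ~100,000 divisions later
```

Even at this toy scale, multiplying is one operation while factoring takes about a hundred thousand trial divisions; double the number of digits in the primes and the gap explodes.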

Cryptology will help you
win wars and shop online

Am I implying that computer security is premised on our inability to do some silly arithmetic fast enough? I surely am. If the Jurassics were shown to be tractable, not a single computer security system would be safe. Which is why for eBay to auction off a proof of P=NP would be suicidal. Worse: factoring is not even known—or, for that matter, thought—to be one of the Jurassics. It could well be a cuddly pet dinosaur eager to please its master (if only its master had the brains to see that). One cannot rule out the existence of a fast factoring algorithm that would have no bearing on the P=NP question.

In fact, such an algorithm exists. All of the recent hoopla about quantum computing owes to the collective panic caused by Peter Shor’s discovery that factoring is tractable on a quantum iPod. That building the thing itself is proving quite hopeless has helped to calm the frayed nerves of computer security experts. And yet there remains the spine-chilling possibility that maybe, just maybe, factoring is doable in practice on a humble laptop. Paranoid security pros might want to hold on to their Prozac a while longer.

Cryptology is a two-faced Janus. One side studies how to decrypt the secret messages that bad people exchange with one another. That’s cryptanalysis: think Nazi code, Bletchley Park, victory parade, streamers, confetti, sex, booze, and rock ‘n’ roll. The other branch of the field, cryptography, seeks clever ways of encoding secret messages for good people to send to other good people, so that bad people get denied the streamers, the confetti, and all the rest. Much of computer security relies on public-key cryptography. The idea is for, say, eBay to post an encryption algorithm on the web that everybody can use. When you are ready to purchase that hula hoop, you’ll type your credit card information into your computer, encrypt it right there, and then send the resulting gobbledygook over the Internet. Naturally, the folks at eBay will need their own secret decryption algorithm to make sense of the junk they’ll receive from you. (Whereas poor taste is all you’ll need to make sense of the junk you’ll receive from them.) The punchline is that no one should be able to decrypt anything unless they have that secret algorithm in their possession.

Remember, guys, not a word about
our factoring algorithm, okay?

Easier said than done. Consider the fiendishly clever algorithm that encodes the first two words of this sentence as dpotjefs uif. So easy to encrypt: just replace each letter in the text by the next one in the alphabet. Now assume you knew this encryption scheme. How in the world would you go about decrypting a message? Ah, this is where algorithmic genius kicks in. (Algorithmistas get paid the big bucks for a reason.) It’s a bit technical so I’ll write slowly: replace each letter in the ciphertext by the previous one in the alphabet. Ingenious, no? And fast, too! The only problem with the system is that superior minds can crack it. So is there a cryptographic scheme that is unbreakable, irrespective of how many geniuses roam the earth? It should be child’s play to go one way (encrypt) but a gargantuan undertaking to go back (decrypt)—unless, that is, one knows the decryption algorithm, in which case it should be a cinch.
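The scheme above, known to the textbooks as a Caesar shift, fits in a few lines of code:

```python
def shift(text: str, k: int) -> str:
    """Shift each letter k places in the alphabet, wrapping around."""
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord("a") if ch.islower() else ord("A")
            out.append(chr(base + (ord(ch) - base + k) % 26))
        else:
            out.append(ch)   # spaces and punctuation pass through untouched
    return "".join(out)

encrypt = lambda text: shift(text, 1)    # the fiendishly clever part
decrypt = lambda text: shift(text, -1)   # the algorithmic genius part

print(encrypt("consider the"))   # dpotjefs uif
print(decrypt("dpotjefs uif"))   # consider the
```

The fatal flaw is plain: anyone who knows the encryption scheme can run it backward, which is precisely what a serious cryptosystem must rule out.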

RSA, named after Ron Rivest, Adi Shamir, and Len Adleman, is just such a scheme. It’s an exceedingly clever, elegant public-key cryptosystem that, amazingly, requires only multiplication and long division. It rules e-commerce and pops up in countless security applications. Its universal acclaim got its inventors the Turing Award (the “Nobel prize” of computer science). More important, it got Rivest a chance to throw the ceremonial first pitch for the first Red Sox-Yankees game of the 2004 season. Yes, RSA is that big! There is one catch, though (pun intended): if factoring proves to be tractable then it’s bye-bye RSA, hello shopping mall.
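For the curious, here is the entire RSA machinery on primes small enough to fit in a pocket (61 and 53 are the usual textbook pair; real deployments use primes hundreds of digits long). Note that encrypting and decrypting really are just modular arithmetic, i.e., multiply and divide:

```python
p, q = 61, 53
n = p * q                    # 3233: the public modulus, safe to publish
phi = (p - 1) * (q - 1)      # 3120: easy to compute only if you know p and q
e = 17                       # the public encryption exponent
d = pow(e, -1, phi)          # 2753: the private exponent, inverse of e mod phi

message = 1234
cipher = pow(message, e, n)  # anyone can do this knowing only (n, e)
plain = pow(cipher, d, n)    # only the holder of d can undo it
print(plain)                 # prints 1234
```

Breaking the scheme amounts to recovering p and q from n alone, that is, to factoring; hence the catch mentioned in the text. (The three-argument `pow` for the modular inverse needs Python 3.8 or later.)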

The computational art of persuasion

Isn’t intractability just a variant of undecidability, the mother’s milk of logicians? One notion evokes billions of years, the other eternity—what’s the difference? Whether the execution of [program | data] ever terminates is undecidable. In other words, one cannot hope to find out by writing another program and reading the output of [another program | [program | data]]. Of side interest, note how the whole document [program | data] is now treated as mere data: an artful cadenza from Maestro Turing.
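Turing's cadenza can be hummed in a few lines. A sketch of the classic diagonal argument, with a hypothetical `halts` oracle that, by this very contradiction, cannot exist:

```python
# Suppose the oracle below existed and always answered the question
# "does program(data) ever terminate?" (it can't — that's the point).
def halts(program, data):
    ...  # hypothetical; provably impossible to implement in general

def troublemaker(program):
    # Feed the program its own text as data: the [program | program] trick.
    if halts(program, program):
        while True:      # if it would halt, loop forever
            pass
    # ...and if it would loop forever, halt at once.

# Now ask: does troublemaker(troublemaker) halt? It halts exactly when
# halts says it doesn't — a contradiction, so no such oracle exists.
```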

Very nice, but how’s undecidability helping us go through life with a smile on our face? It doesn’t. In fact, no one ever tried to benefit from an undecidable problem who didn’t wind up slouched face down on the Viennese gentleman’s couch. Not so with intractable problems. Just as quantum mechanics shattered the platonic view of a reality amenable to noninvasive observation, tractability has clobbered classical notions of identity, randomness, and knowledge. And that’s a good thing.

Why? Let me hereby declare two objects to be “identical” if to tell them apart is intractable, regardless of how different they might actually be. A deck of cards will be “perfectly” shuffled if it’s impossible to prove it otherwise in polynomial time. It is one of the sweet ironies of computing that the existence of an intractable world out there makes our life down here so much easier. Think of it as the Olympics in reverse: if you can’t run the 100-meter dash under 10 seconds, you win the gold!

Scientists of all stripes are insatiable consumers of random numbers: try taking a poll, conducting clinical trials, or running a lottery without them! To produce randomness can be quite arduous. To this day, only two methods have been scientifically validated. One of them is the infamous “Kitty Flop.” Strap buttered toast to the back of a cat and drop the animal from a PETA-approved height: if the butter hits the ground, record a 1; else a 0. For more bits, repeat. Randomness results from the tension between Murphy’s law and the feline penchant for landing on one’s feet. The other method is the classical “Coriolis Flush.” This time, go to the equator and flush the toilet: if the water whirls clockwise, your random bit is a 1; else it’s a 0.

Now think how much easier it’d be if cheating were allowed. Not even bad plumbing could stop you (though many hope it would). Okay, your numbers are not truly random and your cards are not properly shuffled, but if to show they are not is intractable then why should you care? Hardness creates easiness. Of course, computer scientists have simply rediscovered what professional cyclists have known for years: the irresistible lure of intractability (of drug detection).

You’re not thinking, I hope, that this is all perched on the same moral high ground as Don Corleone’s philosophy that crime is not breaking the law but getting caught. If you are, will you please learn to think positive? Our take on intractability is really no different from the 1895 Supreme Court decision in Coffin v. United States that introduced to American jurisprudence the maxim “Innocent until proven guilty.” Reality is not what is but what can be proven to be (with bounded patience). If you think this sort of tractability-induced relativism takes us down the garden path, think again. It actually cleanses classical notions of serious defects.

Take knowledge, for example: here’s something far more faith-based than we’d like to admit. We “know” that the speed of light is constant, but who among us has actually bothered to measure it? We know because we trust. Not all of us have that luxury. Say you’re a fugitive from the law. (Yes, I know, your favorite metaphor.) The authorities don’t trust you much and—one can safely assume—the feeling is mutual. How then can you convince the police of your innocence? Reveal too little and they won’t believe you. Reveal too much and they’ll catch you. Intractability holds the key to the answer. And the Feds hold the key to my prison cell if I say more. Sorry, nothing to see here, move along.

Fresh, juicy primes!

Years have passed and you’ve traded your fugitive’s garb for the funky duds of a math genius who’s discovered how to factor integers in a flash. Sniffing a business opportunity, you offer to factor anybody’s favorite number for a small fee. There might be a huge market for that, but it’s less clear there’s nearly enough gullibility around for anyone to take you up on your offer—especially with your mugshot still hanging in the post office. No one is likely to cough up any cash unless they can see the prime factors. But then why would you reward such distrust by revealing the factors in the first place? Obviously, some confidence-building is in order.

What will do the trick is a dialogue between you and the buyer that persuades her that you know the factors, all the while leaking no information about them whatsoever. Amazingly, such an unlikely dialogue exists: for this and, in fact, for any of our Jurassics. Alice can convince you that she can split up your iPod library evenly without dropping the slightest hint about how to do it. (A technical aside: this requires a slightly stronger intractability assumption than P≠NP.) Say hello to the great zero-knowledge (ZK) paradox: a congenital liar can convince a hardened skeptic that she knows something without revealing a thing about it. ZK dialogues leave no option but for liars to tell the truth and for doubting Thomases to believe. They render dishonesty irrelevant, for trusting comes naturally to a society where all liars get caught.
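The dialogues behind such claims are too elaborate to spell out here, but one classical member of the family, the Fiat–Shamir identification scheme, shows the flavor: the prover demonstrates knowledge of a square root modulo n (a problem as hard as factoring n) while the transcript reveals nothing about it. A toy sketch in Python, with illustrative numbers:

```python
import random

# The prover knows a secret s with v = s^2 mod n; n is a product of two
# secret primes. Each round convinces the verifier a little more, without
# leaking s: a cheater survives a round with probability only 1/2.
n = 3233                 # 61 * 53 (toy size; real n is enormous)
s = 123                  # prover's secret
v = pow(s, 2, n)         # public value

def round_of_proof(rng):
    r = rng.randrange(1, n)
    x = pow(r, 2, n)              # prover's commitment
    b = rng.randrange(2)          # verifier's random challenge bit
    y = (r * pow(s, b, n)) % n    # response: r, or r*s, masked by r
    # Verifier's check: y^2 must equal x * v^b (mod n).
    return pow(y, 2, n) == (x * pow(v, b, n)) % n

rng = random.Random(0)
assert all(round_of_proof(rng) for _ in range(20))
```

When b = 0 the response is a random number; when b = 1 the secret is masked by one. Either way the verifier learns nothing, yet after twenty rounds a liar survives with odds worse than one in a million.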

What’s intractability got to do with it? Everything. If factoring were known to be tractable, the buyer would need no evidence that you could factor: she could just do it herself and ignore your services—bakers don’t buy bread. At this point, the reader might have a nagging suspicion of defective logic: if factoring is so hard, then who’s going to be the seller? Superman? In e-commerce applications, numbers to be factored are formed by multiplying huge primes together. In this way, the factors are known ahead of time to those privy to this process and live in intractability limboland for all others.

The book of zero-knowledge

It gets better. Not only can two parties convince each other of their respective knowledge without leaking any of it; they can also reason about it. Two businessmen get stuck in an elevator. Naturally, a single thought runs through their minds: finding out who’s the wealthier. Thanks to ZK theory, they’ll be able to do so without revealing anything about their own worth (material worth, that is—the other kind is already in full view).

Feel the pain of two nuclear powers, Learsiland and Aidniland. Not being signatories to the Nuclear Non-Proliferation Treaty, only they know the exact size of their nuclear arsenals (at least one hopes they do). Computing theory would allow Learsiland to prove to Aidniland that it outnukes it without leaking any information about its deterrent’s strength. The case of Nariland is more complex: it only wishes to demonstrate compliance with the NPT (which it’s signed) without revealing any information about its nuclear facilities. While these questions are still open, they are right up ZK’s alley. Game theorists made quite a name for themselves in the Cold War by explaining why the aptly named MAD strategy of nuclear deterrence was not quite as mad as it sounded. Expect zero-knowledgists to take up equally daunting “doomsday” challenges in the years ahead. And, when they do, get yourself a large supply of milk and cookies, a copy of Kierkegaard’s “Fear and Trembling,” and unrestricted access to a deep cave.

More amazing still than ZK is this thing called PCP (for “probabilistically checkable proofs”; not for what you think). For a taste of it, consider the sociological oddity that great unsolved math problems seem to attract crackpots like flypaper. Say I am one of them. One day I call the folks over at the Clay Math Institute to inform them that I’ve just cracked the Riemann hypothesis (the biggest, baddest beast in the math jungle). And could they please deposit my million-dollar check into my Nigerian account presto? Being the gracious sort, Landon and Lavinia Clay indulge me with a comforting “Sure,” while adding the perfunctory plea: “As you know, we’re a little fussy about the format of our math proofs. So please make sure yours conforms to our standards—instructions available on our web site, blah, blah.” To my relief, that proves quite easy—even with that damn caps lock key stuck in the down position—and the new proof is barely longer than the old one. Over at Clay headquarters, meanwhile, no one has any illusions about me (fools!) but, bless the lawyers, they’re obligated to verify the validity of my proof.

Gotta run. Let’s try PCP!

To do that, they’ve figured out an amazing way, the PCP way. It goes like this: Mr and Mrs Clay will pick four characters from my proof at random and throw the rest in the garbage without even looking at it. They will then assemble the characters into a four-letter word and read it out loud very slowly—it’s not broadcast on American TV, so it’s okay. Finally, based on that word alone, they will declare my proof valid or bogus. The kicker: their conclusion will be correct! Granted, there’s a tiny chance of error due to the use of random numbers, but by repeating this little game a few times they can make a screwup less likely than having their favorite baboon type all of Hamlet in perfect Mandarin.

At this point, no doubt you’re wondering whether to believe this mumbo-jumbo might require not only applying PCP but also smoking it. If my proof is correct, I can see how running it through the Clays’ gauntlet of checks and tests would leave it unscathed. But, based on a lonely four-letter word, how will they know I’ve cracked Riemann’s hypothesis and not a baby cousin, like the Riemann hypothesis for function fields, or a baby cousin’s baby cousin like 1+1=2? If my proof is bogus (perish the thought) then their task seems equally hopeless. Presumably, the formatting instructions are meant to smear any bug across the proof so as to corrupt any four letters picked at random. But how can they be sure that, in order to evade their dragnet, I haven’t played fast and loose with their silly formatting rules? Crackpots armed with all-caps keyboards will do the darndest thing. Poor Mr and Mrs Clay! They must check not only my math but also my abidance by the rules. So many ways to cheat, so few things to check.

When Abu’s light was
shining on Baghdad

PCP is the ultimate lie-busting device. Why ultimate? Because it is instantaneous and foolproof. The time-honored approach to truth finding is the court trial, where endless questioning between two parties, each one with good reasons to lie, leads to the truth or to a mistrial, but never to an erroneous judgment (yes, I know). PCP introduces the instant-trial system. Once the case has been brought before the judge, it is decided on the spot after only a few seconds of cross-examination. Justice is fully served; and yet the judge will go back to her chamber utterly clueless as to what the case was about. PCP is one of the most amazing algorithms of our time. It steals philosophy’s thunder by turning on its head basic notions of evidence, persuasion, and trust. Somewhere, somehow, Ludwig the Tractatus Man is smiling.

To say that we’re nowhere near resolving P vs NP is a safe prophecy. But why? There are few mysteries in life that human stupidity cannot account for, but to blame the P=NP conundrum on the unbearable lightness of our addled brains would be a cop-out. Better to point the finger at the untamed power of the Algorithm—which, despite rumors to the contrary, was not named after Al Gore but after Abū ‘Abd Allāh Muhammad ibn Mūsā al-Khwārizmī. As ZK and PCP demonstrate, tractability reaches far beyond the racetrack where computing competes for speed. It literally forces us to think differently. The agent of change is the ubiquitous Algorithm. Let’s look over the horizon where its disruptive force beckons, shall we?

Thinking algorithmically

Algorithms are often compared to recipes. As clichés go, a little shopworn perhaps, but remember: no metaphor that appeals to one’s stomach can be truly bad. Furthermore, the literary analogy is spot-on. Algorithms are—and should be understood as—works of literature. The simplest ones are short vignettes that loop through a trivial algebraic calculation to paint fractals, those complex, pointillistic pictures much in vogue in the sci-fi movie industry. Just a few lines long, these computing zingers will print the transcendental digits of π, sort huge sets of numbers, model dynamical systems, or tell you on which day of the week your 150th birthday will fall (something whose relevance we’ve already covered). Zingers can do everything. For the rest, we have, one notch up on the sophistication scale, the sonnets, ballads, and novellas of the algorithmic world. Hiding behind their drab acronyms, of which RSA, FFT, SVD, LLL, AKS, KMP, and SVM form but a small sample, these marvels of ingenuity are the engines driving the algorithmic revolution currently underway. (And, yes, you may be forgiven for thinking that a computer geek’s idea of culinary heaven is a nice big bowl of alphabet soup.) At the rarefied end of the literary range, we find the lush, complex, multilayered novels. The Algorithmistas’ pride and joy, they are the big, glorious tomes on the coffee table that everyone talks about but only the fearless read.

  ‘fetch branch push load store jump fetch…’
Who writes this crap?

Give it to them, algorithmic zingers know how to make a scientist swoon. No one who’s ever tried to calculate the digits of π by hand can remain unmoved at the sight of its decimal expansion flooding a computer screen like lava flowing down a volcano. Less impressive perhaps but just as useful is this deceptively simple data retrieval technique called binary search, or BS for short. Whenever you look up a friend’s name in the phone book, chances are you’re using a variant of BS—unless you’re the patient type who prefers exhaustive search (ES) and finds joy in combing through the directory alphabetically till luck strikes. Binary search is exponentially (ie, incomparably) faster than ES. If someone told you to open the phone book in the middle and check whether the name is in the first or second half; then ordered you to repeat the same operation in the relevant half and go on like that until you spotted your friend’s name, you would shoot back: “That’s BS!”

Well, yes and no. Say your phone book had a million entries and each step took one second: BS would take only twenty seconds but ES would typically run for five days! Five days?! Imagine that. What if it were an emergency and you had to look up the number for 911? (Yep, there’s no low to which this writer won’t stoop.) The key to binary search is to have an ordered list. To appreciate the relevance of sorting, suppose that you forgot the name of your friend (okay, acquaintance) but you had her number. Since the phone numbers typically appear in quasi-random order, the name could just be anywhere and you’d be stuck with ES. There would be two ways for you to get around this: to be the famous Thomas Magnum and bribe the Honolulu police chief to get your hands on the reverse directory; or to use something called a hash table: a key idea of computer science.
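Both ideas fit in a few lines. A sketch in Python, on an invented five-entry phone book; `bisect_left` does the repeated halving for us:

```python
import bisect

# A sorted phone book. Binary search needs order — that's the whole point.
book = sorted([("Adams", "555-0101"), ("Baker", "555-0102"),
               ("Chan", "555-0103"), ("Diaz", "555-0104"),
               ("Evans", "555-0105")])
names = [name for name, _ in book]

def lookup(name):
    # Binary search: halve the candidate range at every step — O(log n)
    # comparisons, versus O(n) for exhaustive search.
    i = bisect.bisect_left(names, name)
    if i < len(names) and names[i] == name:
        return book[i][1]
    return None

# Reverse lookup (number -> name): the numbers aren't sorted, so build a
# hash table once and answer each query in expected constant time.
reverse = {number: name for name, number in book}

assert lookup("Chan") == "555-0103"
assert reverse["555-0104"] == "Diaz"
```

With a million entries, `lookup` needs about twenty comparisons; the hash table sidesteps sorting altogether, which is exactly the trick the reverse directory calls for.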

Hash table? Hmm, I know what you’re thinking: Algorithmistas dig hash tables; they’re down for PCP; they crack codes; they get bent out of shape by morphin’; they swear by quicksnort (or whatever it’s called). Coincidence? Computer scientists will say yes, but what else are they supposed to say?

Algorithms for searching the phone book or spewing out the digits of π are race horses: their sole function is to run fast and obey their masters. Breeding Triple Crown winners has been high on computer science’s agenda—too high, some will say. Blame this on the sheer exhilaration of the sport. Algorithmic racing champs are creatures of dazzling beauty, and a chance to breed them is a rare privilege. That said, whizzing around the track at lightning speed is not the be-all and end-all of algorithmic life. Creating magic tricks is just as highly prized: remember RSA, PCP, ZK. The phenomenal rise of Google’s fortunes owes to a single algorithmic gem, PageRank, leavened by the investing exuberance of legions of believers. To make sense of the World Wide Web is algorithmic in a qualitative sense. Speed is a secondary issue. And so PageRank, itself no slouch on the track, is treasured for its brains, not its legs.
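At its core, PageRank is a fixed-point computation: a page matters if pages that matter link to it. A minimal power-iteration sketch in Python, on an invented four-page web with the customary damping factor of 0.85 (the real system runs on billions of pages, with many refinements):

```python
# page -> pages it links to (a toy graph, invented for illustration)
links = {
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],
}
pages = list(links)
d = 0.85                                  # damping factor
rank = {p: 1 / len(pages) for p in pages}

for _ in range(50):                       # iterate toward the fixed point
    new = {p: (1 - d) / len(pages) for p in pages}
    for p, outs in links.items():
        for q in outs:
            new[q] += d * rank[p] / len(outs)
    rank = new

# "C" collects links from everyone else, so it ends up ranked highest.
best = max(rank, key=rank.get)
```

The ranks form a probability distribution (they sum to 1): the stationary odds that a bored random surfer, clicking links and occasionally teleporting, lands on each page.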

Hold on! To make sense of the world, we have math. Who needs algorithms? It is beyond dispute that the dizzying success of 20th century science is, to a large degree, the triumph of mathematics. A page’s worth of math formulas is enough to explain most of the physical phenomena around us: why things fly, fall, float, gravitate, radiate, blow up, etc. As Albert Einstein said, “The most incomprehensible thing about the universe is that it is comprehensible.” Granted, Einstein’s assurance that something is comprehensible might not necessarily reassure everyone, but all would agree that the universe speaks in one tongue and one tongue only: mathematics.

Don’t google us, we’ll google you.

But does it, really? This consensus is being challenged today. As young minds turn to the sciences of the new century with stars in their eyes, they’re finding old math wanting. Biologists have by now a pretty good idea of what a cell looks like, but they’ve had trouble figuring out the magical equations that will explain what it does. How the brain works is a mystery (or sometimes, as in the case of our 43rd president, an overstatement) whose long, dark veil mathematics has failed to lift.

Economists are a refreshingly humble lot—quite a surprise really, considering how little they have to be humble about. Their unfailing predictions are rooted in the holy verities of higher math. True to form, they’ll sheepishly admit that this sacred bond comes with the requisite assumption that economic agents, also known as humans, are benighted, robotic dodos—something which unfortunately is not always true, even among economists.

A consensus is emerging that, this time around, throwing more differential equations at the problems won’t cut it. Mathematics shines in domains replete with symmetry, regularity, periodicity—things often missing in the life and social sciences. Contrast a crystal structure (grist for algebra’s mill) with the World Wide Web (cannon fodder for algorithms). No math formula will ever model whole biological organisms, economies, ecologies, or large, live networks. Will the Algorithm come to the rescue? This is the next great hope. The algorithmic lens on science is full of promise—and pitfalls.

First, the promise. If you squint hard enough, a network of autonomous agents interacting together will begin to look like a giant distributed algorithm in action. Proteins respond to local stimuli to keep your heart pumping, your lungs breathing, and your eyes glued to this essay—how more algorithmic can anything get? The concomitance of local actions and reactions yielding large-scale effects is a characteristic trait of an algorithm. It would be naive to expect mere formulas like those governing the cycles of the moon to explain the cycles of the cell or of the stock market.

Contrarians will voice the objection that an algorithm is just a math formula in disguise, so what’s the big hoopla about? The answer is: yes, so what? The issue here is not logical equivalence but expressibility. Technically, number theory is just a branch of set theory, but no one thinks like that because it’s not helpful. Similarly, the algorithmic paradigm is not about what but how to think. The issue of expressiveness is subtle but crucial: it leads to the key notion of abstraction and is worth a few words here (and a few books elsewhere).

Remember the evil Brazilian butterfly? Yes, the one that idles the time away by casting typhoons upon China with the flap of a wing. This is the stuff of legend and tall tales (also known as chaos theory). Simple, zinger-like algorithms model this sort of phenomenon while neatly capturing one of the tenets of computing: the capacity of a local action to unleash colossal forces on a global scale; complexity emerging out of triviality.

Al-Khwarizmi takes wing

Create a virtual aviary of simulated geese and endow each bird with a handful of simple rules: (1) Spot a flock of geese? Follow its perceived center; (2) Get too close to a goose? Step aside; (3) Get your view blocked by another goose? Move laterally away from it; etc. Release a hundred of these critters into the (virtual) wild and watch a distributed algorithm come to life, as a flock of graceful geese migrate in perfect formation. Even trivial rules can produce self-organizing systems with patterns of behavior that look almost “intelligent.” Astonishingly, the simplest of algorithms mediate that sort of magic.
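Two of the rules (follow the flock's center, step aside when crowded) transcribe almost verbatim into code. A bare-bones 2D sketch in Python; the gains and distances are invented tuning knobs, and the "aviary" is just a list of positions and velocities:

```python
import random

random.seed(0)
N, STEPS, NEAR, TOO_CLOSE = 100, 50, 10.0, 1.0

# Positions and velocities of the virtual geese; everyone starts flying east.
geese = [[random.uniform(0, 50), random.uniform(0, 50)] for _ in range(N)]
vels = [[1.0, 0.0] for _ in range(N)]

def step():
    for i, (x, y) in enumerate(geese):
        nbrs = [g for j, g in enumerate(geese)
                if j != i and abs(g[0] - x) + abs(g[1] - y) < NEAR]
        if not nbrs:
            continue
        # Rule 1: follow the perceived center of the nearby flock.
        cx = sum(g[0] for g in nbrs) / len(nbrs)
        cy = sum(g[1] for g in nbrs) / len(nbrs)
        vels[i][0] += 0.05 * (cx - x)
        vels[i][1] += 0.05 * (cy - y)
        # Rule 2: step aside from any goose that gets too close.
        for g in nbrs:
            if abs(g[0] - x) + abs(g[1] - y) < TOO_CLOSE:
                vels[i][0] -= 0.1 * (g[0] - x)
                vels[i][1] -= 0.1 * (g[1] - y)
    for i in range(N):
        geese[i][0] += vels[i][0]
        geese[i][1] += vels[i][1]

for _ in range(STEPS):
    step()
# The flock drifts east together: no goose knows about "formation",
# yet the local rules produce coordinated, global motion.
```

No bird in this program has any notion of a flock, which is precisely the point: the pattern belongs to the system, not to any rule.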

The local rules of trivial zingers carry enough punch to produce complex systems; in fact, by Church-Turing universality, to produce any complex system. Obviously, not even algorithmic sonnets, novellas, or Homeric epics can beat that. So why bother with the distinction? Perhaps for the same reason the snobs among us are loath to blur the difference between Jay Leno and Leo Tolstoy. But isn’t “War and Peace” just an endless collection of one-liners? Not quite. The subtlety here is called abstraction. Train your binoculars on a single (virtual) goose in flight and you’ll see a bird-brained, rule-driven robot flying over Dullsville airspace. Zoom out and you’ll be treated to a majestic flock of birds flying in formation. Abstraction is the ability to choose the zoom factor. Algorithmic novels allow a plethora of abstraction levels that are entirely alien to zingers.

Take war, for example. At its most basic, war is a soldier valiantly following combat rules on the battlefield. At a higher level of abstraction, it is a clash of warfare strategies. Mindful of Wellington’s dictum that Waterloo was won on the playing fields of Eton (where they take their pillow fighting seriously), one might concentrate instead on the schooling of the officer corps. Clausewitz devotees who see war as politics by other means will adjust the zoom lens to focus on the political landscape. Abstraction can be vertical: a young English infantryman within a platoon within a company within a battalion within a regiment within a mass grave on the banks of the Somme.

Or it can be horizontal: heterogeneous units interacting together within an algorithmic “ecology.” Unlike zingers, algorithmic novels are complex systems in and of themselves. Whereas most of what a zinger does contributes directly to its output, the epics of the algorithmic world devote most of their energies to servicing their constituent parts via swarms of intricate data structures. Most of these typically serve functions that bear no direct relevance to the algorithm’s overall purpose—just as the mRNA of a computer programmer rarely concerns itself with the faster production of Java code.

The parallel with biological organisms is compelling but far from understood. To this day, for example, genetics remains the art of writing the captions for a giant cartoon strip. Molecular snapshots segue from one scene to the next through plots narrated by circuit-like chemical constructs—zingers, really—that embody only the most rudimentary notions of abstraction. Self-reference is associated mostly with self-replication. In the algorithmic world, by contrast, it is the engine powering the complex recursive designs that give abstraction its amazing richness: it is, in fact, the very essence of computing. Should even a fraction of that power be harnessed for modeling purposes in systems biology, neuroscience, economics, or behavioral ecology, there’s no telling what might happen (admittedly, always a safe thing to say). To borrow the Kuhn cliché, algorithmic thinking could well cause a paradigm shift. Whether the paradigm shifts, shuffles, sashays, or boogies its way into the sciences, it seems destined to make a lasting imprint.

Had Newton been hit by a flying
goose and not a falling apple…

Now the pitfalls. What could disrupt the rosy scenario we so joyfully scripted? The future of the Algorithm as a modeling device is not in doubt. For its revolutionary impact to be felt in full, however, something else needs to happen. Let’s try a thought experiment, shall we? You’re the unreconstructed Algorithm skeptic. Fresh from splitting your playlist, Alice, naturally, is the advocate. One day, she comes to you with a twinkle in her eye and a question on her mind: “What are the benefits of the central law of mechanics?” After a quick trip to Wikipedia to reactivate your high school physics neurons and dust off the cobwebs around them, you reply that F=ma does a decent job of modeling the motion of an apple as it is about to crash on Newton’s head: “What’s not to like about that?” “Oh, nothing,” retorts Alice, “except that algorithms can be faithful modelers, too; they’re great for conducting simulations and making predictions.” Pouncing for the kill, she adds: “By the way, to be of any use, your vaunted formulas will first need to be converted into algorithms.” Touché.

Ahead on points, Alice’s position will dramatically unravel the minute you remind her that F=ma lives in the world of calculus, which means that the full power of analysis and algebra can be brought to bear. From F=ma, for example, one finds that: (i) the force doubles when the mass does; (ii) courtesy of the law of gravity, the apple’s position is a quadratic function of time; (iii) the invariance of Maxwell’s equations under constant motion kills F=ma and begets the theory of special relativity. And all of this is done with math alone! Wish Alice good luck trying to get her beloved algorithms to pull that kind of stunt. Math gives us the tools for doing physics; more important, it gives us the tools for doing math. We get not only the equations but also the tools for modifying, combining, harmonizing, generalizing them; in short, for reasoning about them. We get the characters of the drama as well as the whole script!

Is there any hope for a “calculus” of algorithms that would enable us to knead them like Play-Doh to form new algorithmic shapes from old ones? Algebraic geometry tells us what happens when we throw in a bunch of polynomial equations together. What theory will tell us what happens when we throw in a bunch of algorithms together? As long as they remain isolated, free-floating creatures, hatched on individual whims for the sole purpose of dispatching the next quacking duck flailing in the open-problems covey, algorithms will be parts without a whole; and the promise of the Algorithm will remain a promise deferred.

While the magic of algorithms has long held computing theorists in its thrall, their potential power has been chronically underestimated; it’s been the life story of the field, in fact, that they are found to do one day what no one thought them capable of doing the day before. If proving limitations on algorithms has been so hard, maybe it’s because they can do so much. Algorithmistas will likely need their own “Google Earth” to navigate the treacherous canyons of Turingstan and find their way to the lush oases amid the wilderness. But mark my words: the algorithmic land will prove as fertile as the one the Pilgrims found in New England and its settlement as revolutionary.

Truth be told, the 1776 of computing is not quite upon us. If the Algorithm is the New World, we are still building the landing dock at Plymouth Rock. Until we chart out the vast expanses of the algorithmic frontier, the P vs NP mystery is likely to remain just that. Only when the Algorithm becomes not just a body of techniques but a way of thinking will the young sciences of the new century cease to be the hapless nails that the hammer of old math keeps hitting with maniacal glee.

One thing is certain. Moore’s Law has put computing on the map: the Algorithm will now unleash its true potential. That’s one prediction Lord Kelvin never made, so you may safely trust the future to be kind to it.

May the Algorithm’s Force be with you.

Content Courtesy → The Algorithm: Idiom of Modern Science

What is an Algorithm in Computer Science?

An algorithm is a term used in the field of Computer Science to describe a set of rules or processes for solving a particular problem in a finite number of steps. Its most important feature is that all the rules and operations must be well defined and free of ambiguity. An algorithm usually consists of mathematical operations and inequalities that drive decision branches. This ordered sequence of steps must provide the correct answer to a problem every time. Just as there is more than one approach to solving any particular problem, there can be more than one algorithm for solving it. Some algorithms are more efficient than others because they find the solution more quickly. An implementation of an algorithm is usually a computer program consisting of procedures made of commands; however, a computer program is not itself an algorithm.

For example, here is a famous set of steps that many students remember their lecturer writing on the board at university.

  1. Count from N = 99 down to 0;
  2. N bottles of beer on the wall, N bottles of beer;
  3. Take one down;
  4. Pass it around;
  5. N-1 bottles of beer on the wall;
  6. Next count.
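The six steps transcribe directly into a loop. A sketch in Python:

```python
# The lecture-hall classic as a runnable counting loop.
def bottles_of_beer(start=99):
    lines = []
    for n in range(start, 0, -1):        # step 1: count N down
        lines.append(f"{n} bottles of beer on the wall, {n} bottles of beer;")
        lines.append("take one down, pass it around,")
        lines.append(f"{n - 1} bottles of beer on the wall.")
    return lines

song = bottles_of_beer()
print(song[0])    # 99 bottles of beer on the wall, 99 bottles of beer;
print(song[-1])   # 0 bottles of beer on the wall.
```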

The use of algorithms very likely began as a tool for remembering mathematics because very early mathematics did not use equations. The earliest evidence of man using algorithms was in ancient India, where written scripts show simple mathematical steps being utilised to solve more complex problems. Consequently, almost every civilisation on the Asian continent knew the steps required to solve the famous quadratic equation. Once we discover the steps required for solving any particular problem, anybody can replicate the same results by following the same steps, and knowledge of the underlying principle is no longer necessary.

Algorithms come naturally to humans, and we use them in everyday life. Perhaps the habit goes back to our close relatives, the chimpanzees, who learnt that repeatedly hitting a nut with a big rock would crack it. This simple set of unambiguous steps always worked, resulting in the reward of a nut.

Whilst cracking a nut appears so simple that we would not give it a second thought, the algorithm and thought process behind it are surprisingly advanced.

  1. Find a large rock with a flat surface.
  2. Find a nut.
  3. Place the nut on top of the rock.
  4. Find a small rock.
  5. Hit the nut with the small rock.
  6. Did the nut crack?
  7. If it cracked, discard the broken shell and eat the inner part.
  8. If it did not crack, go back to step 5.
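The eight steps amount to a loop with a decision branch. A sketch in Python, where an invented crack probability stands in for the physics:

```python
import random

def crack_nut(p_crack=0.3, rng=None):
    # p_crack is a made-up stand-in for how likely one hit is to succeed.
    rng = rng or random.Random(0)
    hits = 0
    cracked = False
    while not cracked:            # steps 5-8: hit, check, repeat if needed
        hits += 1
        cracked = rng.random() < p_crack
    return hits                   # step 7: discard shell, eat inner part

print(crack_nut())                # number of hits the (simulated) chimp needed
```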

Today we follow recipes, which are step-by-step instructions for preparing even complex French cuisine. When driving long distances, we make a list of the roads to take and the sequence in which to expect them.


Content Courtesy → What is an Algorithm in Computer Science?
