see (verb)—Perceive with the eyes; discern visually.
Can we teach a computer to see?
Three components are necessary for sight to happen. First, you need an eye, something to detect light. In the mechanical world, a camera would serve this function.
Second, you need a brain to decode the messages being passed along the optic nerve. A computer with a software package can accomplish this.
Third (and this is the most critical), you need cognition—that is, the ability to understand what is being seen. For instance, when we see a yellow car with a small sign on top and the letters T-A-X-I on both sides, we know it is a taxi. Essentially, our brains take in bits of data and render an opinion based on experience.
A computer “sees” in pixels. A pixel is a single unit of color. Of course, a computer cannot grasp the concept of color the way a human can; it sees each pixel only as a bit of data. For example: white = 1, red = 2, blue = 3, yellow = 4, and on and on, with the computer assigning every color it encounters a specific value. (In practice, each pixel is stored as a set of numbers, such as red, green, and blue intensities, but the principle is the same.) The computer then, essentially, creates a map of values. Since the image is now rendered as data, a computer can “understand” it. This is how photo-viewing software works: an image is converted into data, and the computer tells your monitor what color each pixel must be to render the image exactly as it was when the photo was taken. The more pixels the camera captured, and the more pixels your monitor has to display them, the sharper and more detailed the image appears.
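The idea of an image as a map of values can be sketched in a few lines of Python. The single-number color codes below mirror the article’s simplified example (real images store each pixel as red, green, and blue intensities):

```python
# Hypothetical color codes, matching the article's simplified scheme.
WHITE, RED, BLUE, YELLOW = 1, 2, 3, 4

# A tiny 4x4 "image": to the computer, the picture is just this grid.
image = [
    [WHITE, WHITE, YELLOW, WHITE],
    [WHITE, YELLOW, YELLOW, WHITE],
    [WHITE, YELLOW, YELLOW, WHITE],
    [WHITE, WHITE, WHITE, WHITE],
]

# The computer "sees" only the map of values; counting yellow pixels
# is one crude first step toward detecting a yellow object.
yellow_pixels = sum(row.count(YELLOW) for row in image)
print(yellow_pixels)  # 5
```

A photo viewer works the same map in reverse: it reads each stored value and tells the monitor which color to light up at that position.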
Programming a computer to recognize a coin is not impossible. These days, it is not even that difficult. The technology that allows a machine to recognize objects already exists—have you ever had Facebook ask you to tag people in a photo? Or used a funny filter on Snapchat? Facial-recognition software has been around for years, and the same principles could easily be applied to coins.
Getting a computer to identify a coin as a coin will be simple. Getting a computer to differentiate between coin types will be slightly more difficult. It all comes down to the amount of detail the device can detect. In 1999, the best consumer cameras could record 2 megapixels, which was not enough to do the job. Today, an iPhone 6s comes equipped with a camera that can record 12 megapixels—that is, 12 million pixels. A Canon EOS 5D Mark III is a 22.3-megapixel camera. So, detecting detail will not be a problem. The technology already exists to capture it.
The computer will have to be programmed to measure the coin and its features, including the distances between features, to determine the type. That data could be combined with the coin’s weight to get an accurate result.
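As a rough illustration, here is how matching measurements against published specifications might look. The diameters and weights below are the U.S. Mint’s published figures for circulating coins; the matching rule (smallest combined relative error) is just one plausible choice, not an established algorithm:

```python
# Published specs for U.S. circulating coins: (diameter in mm, weight in g).
COIN_SPECS = {
    "dime":    (17.91, 2.268),
    "penny":   (19.05, 2.5),
    "nickel":  (21.21, 5.0),
    "quarter": (24.26, 5.67),
}

def identify_coin(diameter_mm, weight_g):
    """Return the coin type whose specs best match the measurements."""
    def error(specs):
        d, w = specs
        # Combine relative errors so neither measurement dominates.
        return abs(diameter_mm - d) / d + abs(weight_g - w) / w
    return min(COIN_SPECS, key=lambda name: error(COIN_SPECS[name]))

print(identify_coin(24.1, 5.6))   # quarter
print(identify_coin(17.9, 2.3))   # dime
```

A real system would measure many more features—edge reeding, device positions, relief height—but the principle of comparing measurements to known standards is the same.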
So far, we have built a computer that can recognize a coin when it “sees” one.
Now comes the real challenge: how do we teach it to evaluate a coin? The same way you get to Carnegie Hall: practice, practice, practice.
I am talking about neural networks. A neural network learns by being presented with a problem—in our case, by showing the computer a specific coin of a particular grade. The computer analyzes the problem and computes an answer; for us, the answer would be the coin’s grade. The programmer then tells the computer whether its answer is correct. If correct, the computer waits for the next problem. If incorrect, the programmer supplies the correct answer, and the computer adjusts its standards before trying again on the next problem. The more repetitions and corrections the computer gets, the “smarter” it becomes. With enough repetitions across multiple coins of every type and grade, a neural network should enable a computer to “learn” to grade coins.
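The practice-and-correct loop described above can be sketched in miniature. A real coin grader would train a deep network on images, but the hypothetical example below captures the same feedback idea with a single learned threshold on a made-up “wear score” (0 = heavily worn, 1 = pristine):

```python
# Minimal sketch of supervised training by correction. The "network"
# here is just one adjustable number: a threshold separating worn
# coins from mint-state ones on a hypothetical wear score.
def train(examples, threshold=0.8, step=0.05, passes=20):
    for _ in range(passes):
        for wear_score, is_mint in examples:
            guess = wear_score >= threshold
            if guess == is_mint:
                continue  # correct: wait for the next problem
            # Incorrect: the "programmer" reveals the answer, and the
            # standard is nudged toward getting it right next time.
            threshold += -step if is_mint else step
    return threshold

# Labeled practice problems: (wear score, is this coin mint-state?)
examples = [(0.2, False), (0.4, False), (0.7, True), (0.9, True)]
threshold = train(examples)
print(threshold)  # settles between the worn and mint examples
```

Each wrong answer moves the standard slightly, exactly as the article describes: repetition plus correction is what makes the system “smarter.”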
Shazam! We have the ability to teach a computer to authenticate and grade coins. But…
The computer will only ever be able to grade as well as the people programming it. Since all coin grades are really just opinions, and opinions differ among individual graders, who gets to program the computer?
This is a tough question. Most likely the computer will be calibrated by whoever has the guts, and the funding (more on this next time), to create it. Ideally, this will be a group of people with numismatic backgrounds who can collectively teach the system to grade.
There is also the issue of what set of grading standards to use. Should the computer grade to the conservative standards of the 1980s or the slightly looser standards of today? No one is going to crack out their coins for a probable downgrade. And this is where business interests will meet numismatic ethics: will the company behind the computer teach it to grade accurately, even when doing so does not maximize profits?
In the next part I will discuss the grading company behind the computer and how it will probably start. ❒
Kendall Bailey runs TheCoinBlog.net.