It has elegance as a martial
art and poise as a sport.
It
is recognised at the Olympics and has about 100 million practitioners across more than 200 countries,
making it one of the world’s most popular sports.
But with recent controversies hitting the Singapore governing body hard – the Singapore Taekwondo Federation (STF) was suspended by world governing body World Taekwondo in May 2019, and two senior STF officials were found to have breached the international body’s Code of Ethics – there has been much coffee-shop talk about the future of the sport here.
What
can we do to bring the sport back to its glory days?
High-performance sports models around the world include the United States’ NFL (National Football League) and NBA (National Basketball Association), international soccer clubs, world-class Olympic schools and high-profile gyms such as Evolve MMA. All have a pedagogy that works and provide lifetime careers. We are talking well-established businesses.
Can we do the same with taekwondo? As with sports or art, the acquisition and retention of customers is a business question. There must be a value associated with the regular consumption of the sport.
The governing body needs
revitalisation and a new focus to keep up with the modern and tech-savvy
generation. Here is a 10-step approach that may help:
Establish a local taekwondo institute
Launch an annual membership with benefits
Enhance and digitise curriculum
Develop programmes for progression, courses and certification cards
Provide insurance for combat sports
Extend scholarships for international tournaments
Offer career paths
Produce engaging content
Community outreach
Hold an annual “Taekwondo Day”
1. Establish a local taekwondo institute
The
governing body would be a recognised authority that goes beyond basic local
regulation of the sport. As the headquarters for all activities, the Taekwondo
Institute will focus on promoting the sport and supporting auxiliary functions
for all local taekwondo clubs, gyms and community groups.
Cost-effective
services
With a major body comes business bargaining power. When managing a taekwondo club, a lot of effort is spent acquiring logistics services (buses to transport members between club locations and event venues), procuring ambulances and first-aid services for competitive events, and so on. An institute, however, can issue a countrywide tender for the most cost-effective services, which associate and affiliate clubs then get to enjoy. Clubs can rely on the institute for reputable service providers, while service providers sign a contract to supply services to a guaranteed pool of customers. It would be a fair, open and financially compliant process.
Training
location
The institute will offer a physical space that coaches, clubs and trainers can lease – a comfortable training spot with clean changing rooms and shower facilities.
Courses
New black-belt students must attend a first aid and automated external defibrillator (AED) course, administered by a recognised first-aid training provider, before they receive their black belt. Courses such as close-quarters combat could also be organised for all 2nd dan and above black belts to enhance their knowledge of self-defence – and these could be open to all sparring-level colour belts. These are just some ideas on how to continue the education of the taekwondo community.
2. Launch an annual membership with benefits
Although the organisation is not for profit, its activities, utilities, logistics bills and salaries will need financial support. We will need an affordable annual membership fee of about $20.
Membership
will come with benefits from the institute. Remember the bargaining power of
the institute. With a numerical advantage, the business development team can
negotiate deals with reflexology and sports massage clinics, health spas,
sports and nutrition shops, medical clinics for screenings, and more.
Members
will also receive a card with proficiency recognition attached to their
membership number. We could even explore the feasibility of a point system for
members to get more perks through fitness tests, etc.
3. Enhance and digitise curriculum
How can we track our progress in this sport? What skills does a student need to learn to progress to the next belt? How can new black belts guide new colour belts? A black belt’s ability to perform a move does not automatically translate into instructional and pedagogical skill. How often has someone recorded a bout or a particular move to illustrate a taekwondo technique? How about stretching and cool-down techniques?
These skills are essential for an effective class but depend heavily on the experience of the trainer. We could make instructional videos available on a mobile application for learning or refreshing skills.
How to perform a 360-degree kick; step-by-step videos of the Koryo Poomsae; conducting a dynamic warm-up session safely and effectively; viewing attendance and achievements for a course; classroom and facilities management; managing different belt levels in a single class – download the app!
4. Develop training programmes and certification cards
More often than not, we forget that taekwondo is not just the expression of a technique or a type of martial art. Black belts must remember that, for all its strengths, the system can be improved when complemented with real-world self-defence techniques.
We need to continually improve ourselves by exploring and assimilating the strengths of other training styles. One approach would be to offer programmes that enhance the effectiveness of taekwondo.
For example, an acrobatics class teaching backflips and aerial moves could become a recognised proficiency. Physical skills could be organised and delivered in a progressive curriculum – for example, a beginner’s course on how to do a kip-up.
How about ground-fighting techniques, or human anatomy and physiology? The content we could create to complement taekwondo knowledge is immense, and it would provide lifelong learning options for practitioners in a multi-disciplinary approach.
5. Provide insurance for combat sports
For all the benefits of taekwondo, it is a contact sport and injuries inevitably occur – and most insurance companies avoid covering them. A collective authority could negotiate a combat-sports insurance policy open not just to taekwondo practitioners, but to all combat-sport practitioners.
The opportunity and growth potential extend beyond one country, and options for international coverage could also be negotiated with an insurer.
This
would provide peace of mind for competitive individuals as well as regular
practitioners.
6. Extend scholarships for international tournaments
To
promote the sport, we need to elevate the presence of the sport significantly,
and that means nurturing and supporting athletes financially to compete in
high-profile tournaments.
It is basically an investment question of risk versus return. In this case, successful medalists will highlight the achievements of the sport within the country.
Promising athletes distracted by a full-time job will have less energy and commitment for the sport. Many countries producing world-class medalists have a career programme that provides some sort of stipend or salary to support an athlete, and in Singapore there are some scholarships available, such as the SOF-Peter Lim sports scholarship and the Singapore Sports School sports scholarship.
We could offer scholarships so promising taekwondo athletes can focus on training and competitions. Each time athletes achieve a new milestone, they would become eligible for more benefits and funding.
The ultimate goal is an Olympic medal. But for each Olympian, there are dozens of lower-tier medalists. Instead of ignoring them, we should nurture them all with career paths.
The
institute could have a scholarship-athlete management division focused on
developing the next generation of medalists, assisting with the planning of
careers for athletes from the beginning. They would follow a training regime and, depending on the milestones achieved, different paths would open up. This way, there would be performance management and progress.
7. Offer career paths
What can
an outstanding individual expect after an illustrious competitive career? Like it or not, athletes have an expiry
date. Statistics have shown that human performance peaks at about 25 years of age for Olympians,
and retirement is a concern for many athletes.
Full-time competitors have a specific skill set that does not translate well to a corporate environment, and many have likely sacrificed education for training commitments. Should a serious injury stop an athlete from progressing further, do we leave that person in the lurch? Obviously not. Physical skills are not the only component of taekwondo.
The
institute would offer a variety of occupations with progression pathways for
everyone – in business development, coaching, ancillary services, event
organising, talent management and more.
Supported by an institute-sanctioned programme, our athletes could go on to work in the media industry as stunt performers and even as actors. English actor Jason Statham (The Transporter trilogy, 2002-2008; the Fast & Furious franchise, 2013-2019) was a competitive diver, and Chinese actor Jet Li (the Once Upon A Time In China series, 1991-1993) was a national wushu champion from 1974 to 1978, to name just two.
8. Produce engaging content

On social media, the famous South Korean K-Tigers performance and demonstration group produces catchy songs and dances using taekwondo moves that inspire many to learn more about taekwondo.
We could start a YouTube channel producing short exciting clips every week. Something in the style of comedic martial arts theatre JUMP would be a great start – audiences are wowed by the gravity-defying movements of the performers and love the hilarious storyline.
9. Community outreach
We now
have a platform to endorse athletes, a membership system and career route,
among other things. What we need is outreach and engagement. Training is a
lifelong engagement and the earlier one starts taekwondo, the better the
performance outcome.
The sport teaches self-defence and discipline and is a way for children to expend youthful energy.
The
institute could organise free public performances and community events, where
people of all ages can experience what it is like to kick a sandbag, kids can
take part in a high-jump challenge, and the pioneer generation can learn some Poomsae movement patterns. There can be
something for everyone at the taekwondo community festival.
10. Hold an annual “Taekwondo Day”
Humans are constantly on the lookout for the latest deals – we have computer fairs, furniture and home improvement fairs, food fairs and the like.
How
about an annual taekwondo event?
The
Taekwondo Expo could be a weekend-long event where practitioners can bring
friends and family to try the latest gear and combat-sports merchandise, attend
workshops and explore customisation services. Get your tobok (uniform) or belt embroidered
on the spot. Come taste that special nutrition and hydration drink or test the
latest muscle rub. Catch performances, meet interest groups and measure your
body-fat percentage at the health section. Existing vendors in the community
get a free booth, and practitioners can wear their tobok for free entry.
Summary
These
are just some ideas to keep the community engaged as well as promote the
benefits of taekwondo to a larger audience. With tournaments, community
outreach and an annual expo, we can expect each calendar year to be exciting
and fulfilling. Keep fighting!
If you have
any ideas, feel free to leave a comment!
This is part 2 of my future Phone 2025 vision article; for part 1, check out the earlier post. This part details the features that could be in Phone 2025!
Neuromorphic processor with an “AI-core”
Existing smartphones have already been demonstrated digitizing documents, translating signs, driving a car and solving a Rubik’s cube. The 2025 phone will become a butler: providing information you didn’t know you needed, giving answers and solutions on command, and learning your habits, nuances and behaviors to essentially offset human weaknesses.
For that to happen, the processor needs to be powerful – as powerful as a human brain, but without its caveats, such as forgetfulness. The processor will be a multi-SoC (system on chip) design with the standard CPU-GPU cores plus a Vision Processing Unit (VPU) and a neuromorphic core, or Neural Processing Unit (NPU). This CPU-GPU-VPU-NPU processor will pave the way for the artificial intelligence (AI) of the future.
For the sake of simplicity, I call this neuromorphic
processor an Artificially Intelligent Neural Processing Unit (AI-NPU). With
machine-learning algorithms and neural-network (NN) circuitry, this AI-NPU core
will feature deep-learning capability and the smartphone will learn to
anticipate what I want to do next, my schedules, habits, desires and needs in a
more human-like manner than the semantic feedback we have today.
A neuromorphic core is a processor
modeled after the human brain, designed to process sensory data such as images and sound and
respond to changes in that data in ways not specifically programmed. A learning
and constantly evolving core computing architecture is tremendously efficient
as it finds new and better ways to process a task. It’s like learning how to
ride a bicycle. Despite the complexity of the activity, after a few tries, the
task becomes ingrained and effortless, and the brain now automatically
maintains balance and speed to keep a bicycle in motion.
With human-like anticipation and realism, you will not
be able to tell the difference between your phone and a person. By learning
texting habits, the phone will be able to respond to messages by itself, like
having a bot to reply to those tedious chats. The new processor will make Bixby,
Alexa, Siri and Cortana jealous.
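The “learn from use, then anticipate” loop can be illustrated with a deliberately tiny sketch – a first-order frequency model standing in for the far more sophisticated on-device learning an AI-NPU would do. The class name, actions and sample log below are all hypothetical:

```python
from collections import Counter, defaultdict

class HabitModel:
    """Toy habit learner: counts which action tends to follow which,
    then predicts the most likely next action. A real neuromorphic
    pipeline would be far richer; this only shows the learning loop."""

    def __init__(self):
        self.transitions = defaultdict(Counter)

    def observe(self, current_action, next_action):
        # Learn from one observed pair of consecutive actions.
        self.transitions[current_action][next_action] += 1

    def predict(self, current_action):
        # Anticipate the most frequently observed follow-up, if any.
        counts = self.transitions[current_action]
        return counts.most_common(1)[0][0] if counts else None

model = HabitModel()
for day_log in [["alarm", "news", "messages"],
                ["alarm", "news", "email"],
                ["alarm", "news", "messages"]]:
    for cur, nxt in zip(day_log, day_log[1:]):
        model.observe(cur, nxt)

print(model.predict("alarm"))  # prints: news
```

Even this trivial model “anticipates” after a few days of logs – the phone of 2025 would do the same with vastly more signals and a far deeper model.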
The semiconductor industry has been fairly consistent in its projected advancements, with major players investing billions of dollars in R&D, and I expect to see such a powerful CPU shrink down to fit in my smartphone by 2025.
Computing Desktop environment
With all that processing power in a phone, do we really
need a laptop or tablet for everyday computing tasks? The future phone will
become your future laptop or desktop with a simple dock.
The idea is not new. Since 2012 Asus has had a product line, the PadFone, where its smartphone could be
docked into a tablet – increasing the screen real estate and battery life of
the phone.
This desktop-functionality concept was recently updated by Razer’s Project Linda, Microsoft’s Continuum and Samsung’s DeX. Linda turns a smartphone into a trackpad that docks into a laptop body, while DeX is a dock that gives the phone a familiar desktop computing environment. This desktop-PC feature will be mainstream in future phones: just plug a reversible USB Type-C cable into the phone for both graphics and power. Examples today include Continuum and DeX, which run from their makers’ flagship phones. You’d be surprised how something so simple still isn’t intuitive enough today.
I envision that in 2025, we will all be carrying our PC in our pocket, looking for USB-C ports to plug our phones into so we can display our own instant-on PC at work, a friend’s home, or just about anywhere. I’ll wake up, undock my phone from its wireless-charging cradle and, when I reach work, I’ll just dock my phone into the cradle at my desk. There would be no need for a dedicated computer at work or at home. All files are stored on various cloud services (Dropbox, Google Drive, OneDrive), while persistent files live in the phone’s 16TB of storage.
At home or in the office, projectors and screens receive wireless display commands from the phone, compatible with existing wireless display standards such as Apple’s AirPlay, Miracast, Intel Wireless Display (WiDi) and DLNA. As a computing desktop, our 2025 phone will push or stream a desktop screen to any compatible TV, projector or screen.
You’ll finally be free of lugging around a laptop. Just
think about that.
Connectivity
The phone will have the latest connectivity options
built into its communications chips.
5G New Radio (5G-NR) is slated to replace 4G. 4G-LTE was introduced in 2009, and it took a few years for the infrastructure to become mainstream. 5G is in its infancy now, yet Qualcomm has already demonstrated a 900% improvement over existing 4G networks.
By 2021, we should see 5G become mainstream in mobile devices and commonplace in the 2025 phone, with upgrades over existing standards. 5G’s increased speed and bandwidth come from the use of a broader spectrum of frequencies and multiple antenna arrays. The standard also allows device-to-device communication, letting your phone act as the central hub or base station controlling all your other IoT gadgets in the vicinity. Major chipmakers Qualcomm, Intel and Huawei all announced their 5G modems this year.
As for Wi-Fi connectivity, the standard known as IEEE 802.11ax – now referred to as Wi-Fi 6, introduced just this year – will be mainstream in our Phone 2025.
The new standard’s headline feature – multiple-input multiple-output orthogonal frequency-division multiplexing (MIMO-OFDM) – allows bandwidth speeds five times faster than today’s fastest 802.11ac networks. Also, with CO-MIMO antenna arrays, users will experience even faster connectivity when several base stations or routers are nearby, as each data stream is split across, or served by, several routers. This speed upgrade matters most when streaming 4K mixed-reality (4K-MR) data, as aptly shown in the Vimeo video in the next section.
With new bandwidth pipelines streaming directly to the future phone, users will demand ever more instant, high-quality content and information. Search, e-commerce and information avenues will grow ridiculously, driven by instant online demand from consumers. It is one of the reasons Google pays a premium to have its search engine natively installed on Apple devices. The revenue line between the future phone and content/shopping services will blur, and we could see major search engines and retailers putting resources into developing their own phones – witness the Google Pixel 4 and Amazon’s foray into the smartphone market.
The motivation is simple: the future phone is the de facto portal to content, products and services.
The Display
Screen technology has come a very long way over the last decade of smartphones, with pixel densities gradually increasing and pixel sizes slowly decreasing. The first high-density displays – marketed by Apple as the “Retina” screen, and Samsung’s Super AMOLED – both exceeded 200 pixels per inch (ppi) then; today we are at 458ppi on the iPhone 11 Pro and 401ppi on the Samsung S10.
Our smartphones have captured all of our visual attention. Americans spend three to four hours a day looking at their phones, and about 11 hours a day looking at screens of any kind. Needless to say, the screen is still the main interactive surface of Phone 2025 – only the technologies used to build it are going to be amazingly different. Today’s screens are typically built on AMOLED, IPS-LCD or OLED panels, but upcoming technologies such as microLED (mLED) are in the works.
I think QLEDs (quantum-dot LEDs) will become a mature mobile screen technology capable of giving us the chromatic vibrance consumers demand. Quantum-dot displays are not a new technology and are staples of flagship television products today, with manufacturers touting the advantages of QLED over OLED TVs. However, QLEDs are still nascent, and there is a massive commercial push to advance the technology.
I’m just going to call it what it is: the future phone will sport a QLED 4K ultra-resolution screen, likely based on electroluminescent quantum dots (ELQD), and… it will be transparent. Why do we need a transparent QLED screen?
We can now hide the front cameras
and sensors behind the screen. Since the nano-pixels of the QLED screen are so
small, tiny holes or gaps can be created between the light-emitting pixels to
allow light through the screen. Looks like Oppo has already unveiled this cool feature!
No more notches or front-facing
sensors taking up precious screen area. Just one big gorgeous edge-to-edge QLED
screen.
Another cool feature of Phone 2025 is the use of smart nano-optics to create depth perception, allowing the screen to produce a 3D in-depth effect – a sort of holographic viewing experience. This capability is important when we use the phone for eXtended Reality (XR) applications. The new screen is transparent to the cameras and biometric sensors behind it and allows depth/dioptre correction, so the display adjusts according to how far your eyes are from the screen. If the screen is close to the user’s eyes, it blurs or sharpens reciprocally, avoiding the need for corrective optics in XR headsets.
The Cameras

The trend of having multiple cameras once again started with Apple, with the introduction of the iPhone X’s dual camera allowing for different lens elements (wide or telephoto). Manufacturers quickly caught on to the advantages of having more than one camera module, and soon we had triple (Huawei’s P20 Pro and the Apple iPhone 11 Pro), quad (Samsung Galaxy A9) and even five-camera (Nokia 9 PureView) setups.
The main difference between the cameras is their focal lengths. The shorter the focal length, the wider the angle of view, and vice versa. It’s almost like carrying a full set of lenses in your pocket.
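The focal-length/angle-of-view relationship follows directly from a thin-lens model: the horizontal angle of view is 2·arctan(w / 2f) for a sensor of width w and a lens of focal length f. A quick sketch, assuming a 36mm-wide full-frame-equivalent sensor purely for illustration:

```python
import math

def horizontal_fov(focal_length_mm, sensor_width_mm=36.0):
    """Horizontal angle of view (degrees) under a simple thin-lens model."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

# Shorter focal lengths give wider angles of view, and vice versa.
for f in (16, 28, 70, 150):  # ultra-wide, wide, portrait, telephoto
    print(f"{f:>3} mm -> {horizontal_fov(f):5.1f} degrees")
```

Running this shows roughly 97° for a 16mm ultra-wide shrinking to about 14° for a 150mm telephoto – exactly the spread a multi-camera array packs into your pocket.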
Ok so we’ve got some wide-angle shots and some nice
zoomed-in shots. So what? What can we do with our two eyeballs that our phone’s
camera array will allow us to do better?
It’s simple physics: more cameras mean the phone can capture more light, which means impressive low-light vision and photography – a feature available in Huawei’s P30, Google’s Pixel 3 and Apple’s iPhone 11 Pro. I’m talking about night vision.
So,
the Phone 2025 will have an optically stabilized quad camera element array with
the following camera capabilities:
Telephoto zoom – In 2007, I thought a liquid-zoom lens would be a cool way to get optical zoom. After all, you still need actual physical distance to focus light from afar onto the sensor. Then Oppo and Huawei both offered phones with lens elements embedded periscope-style within the camera body. That works too – let’s have two in the future phone.
Macro mode (microscope) – With a focusing distance as short as 1cm on the ultra-wide, ultra-high-resolution main camera, macro shots can be achieved with minimal loss of resolution.
Night-vision mode in real time – First, each sensor combines four pixels into one; then the light captured on all four sensors simultaneously is combined to create a true low-light camera, something popular low-light camcorders are known for. An infrared matrix illuminator beside the camera will help illuminate pitch-black conditions.
True-3D videos – A quad-camera setup provides stereoscopic vision and depth-differentiated video. Because there are always at least two stereoscopic cameras capturing footage with distance information, Phone 2025 essentially becomes a 3D video camera capturing 3D volumetric video and data. Capture 4K 120 frames per second (fps) 3D video on this slick future phone.
Super-resolution photos – Super-resolution is a technique that combines the pixels from the different camera elements into one ultra-large-resolution photo. It differs slightly from night-vision mode, where the pixels are layered on top of one another to create a brighter image; super-resolution places each pixel side by side to make a larger image. There are commercial cameras using this technique – the Light L16 camera from Light.co contains 16 camera modules (five 28mm ƒ/2.0, five 70mm ƒ/2.0 and six 150mm ƒ/2.4 lenses) for a combined resolution of up to 52 million-plus pixels! In fact, LG has filed a patent for a 16-camera-module phone. SIXTEEN. You don’t need that many.
Ultra-slow-motion capture – Fast, sensitive cameras plus a crazy-powerful processor equal ultra-slow-mo videos. Sony’s Xperia XZ3 can do 960fps at 1080p; I’d reckon Phone 2025 can do 4,000fps at 1080p, no sweat. But the higher the fps, the smaller the resolution. Hey, you can’t have everything.
360° 3D videos – Something I would like to see integrated into Phone 2025 is a 360° camera. How cool is that? Today’s 360° cameras, such as the Insta360 and GoPro Fusion, already produce jaw-dropping video features such as “overcapture”. Because the camera captures everything around it, traditional frames can be extracted from the spherical 360° footage taken from a single camera position – giving the illusion of a panning camera following a moving subject, a “bullet-time” effect, and so on.
It’s like many cameras capturing the action all at once. This dream phone would capture simultaneous video from the front and rear camera sensors. With four video streams – two from the front and two from the rear cameras – the AI-NPU stitches it all together.
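The two pixel-combination strategies in the list above – binning neighboring pixels for brightness versus placing pixels side by side for resolution – can be contrasted in a toy sketch. Nested lists of brightness values stand in for real sensor data; this is an illustration of the idea, not a real imaging pipeline:

```python
def bin2x2(pixels):
    """Night-vision-style 2x2 binning: average each 2x2 block into a single
    pixel, trading resolution for a brighter, lower-noise image."""
    h, w = len(pixels), len(pixels[0])
    return [[(pixels[y][x] + pixels[y][x + 1] +
              pixels[y + 1][x] + pixels[y + 1][x + 1]) // 4
             for x in range(0, w, 2)]
            for y in range(0, h, 2)]

def interleave_columns(a, b):
    """Super-resolution-style combine: interleave pixels from two half-pixel
    offset captures side by side, doubling width instead of brightness."""
    return [[px for pair in zip(row_a, row_b) for px in pair]
            for row_a, row_b in zip(a, b)]

frame = [[10, 20, 30, 40],
         [50, 60, 70, 80]]
print(bin2x2(frame))                           # [[35, 55]]
print(interleave_columns([[1, 3]], [[2, 4]]))  # [[1, 2, 3, 4]]
```

Same raw pixels, two different trade-offs: binning halves each dimension but averages away noise, while interleaving grows the image instead.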
XR eXtended Reality
Phone 2025 has a powerful processor and powerful “eyes”. What else would be cool? Something like Tony Stark’s phone in the movie Iron Man 2 – mixed reality with AI machine vision that lets one mark out or pull data spatially from the environment.
Today, this is known as a technology-mediated experience combining virtual and real-world environments – often referred to as Augmented Reality (AR) or Mixed Reality (MR) where some aspect of the real world can still be seen, as with Microsoft’s HoloLens; or Virtual Reality (VR), as with the VIVE system, where the user sees a video feed instead of the actual environment.
The ‘X’ in eXtended is a placeholder for virtual reality (VR), augmented reality (AR) or mixed reality (MR), and XR can be used to casually group these technologies together. In a nutshell, XR allows us to overlay digital objects or information on top of reality or, conversely, see physical objects as present in a digital scene. There have been many attempts, such as the Ghost, the cool hyper-reality concept video by Keiichi Matsuda, and another concept by Unity.
OK, so what can we do with XR? Simple stuff we can do today involves real-time translation: Google’s Translate app translates multiple languages, Photomath solves any math problem you take a picture of, and Google Maps helps you navigate an urban environment.
When app stores for the smartphone were introduced, they paved the way for an industry of applications and businesses; XR-enabled technologies promise to similarly revolutionize the way we interact with our future smartphones.
How are you going to control your fancy MR headset? With an XR- and desktop-environment-enabled smartphone, chances are we will end up interacting with what’s called a Natural User Interface (NUI).
Despite the options available, the challenge has been miniaturization. The sensor would be placed underneath the QLED screen and could be either a dedicated optical sensor or just plain old dual cameras with machine vision in action – and that’s not difficult to implement in a mobile device.
The truth is, an NUI reduces the learning curve of new applications and is critical in XR, where the ability to emulate holding or interacting with a virtual object will greatly increase the usability of our future phone on many productivity fronts.
Fancy a future with people waving and gesticulating at their phones – now that’s body language indeed.
Hybrid Biometrics and Security
The Phone 2025 represents your
entire digital life, and with that, we will need upgraded security. Since the
first fingerprint sensor on the iPhone5S, there have been some exciting
developments in this aspect, such as facial
and iris-recognition on 2017 flagship smartphones such as the iPhone X and
Samsung Galaxy S9.
But how do designers pursue better screen-to-bezel ratios without sacrificing the fingerprint sensor’s footprint? This year, manufacturers introduced a dozen phone models with under-display fingerprint sensors – the Vivo X21, Oppo R17, Huawei P30 Pro, Samsung Galaxy S10, Honor 20 Pro and OnePlus 6T among them – with sensors supplied by Qualcomm, General Interface Solution (GIS), O-film Tech, Fingerprints and Goodix.
However, we’ve seen that fingerprint and facial-recognition security methods can be spoofed or defeated. How can we create a more secure device without sacrificing screen real estate? The next generation of biometrics in the future phone will be multi-factor authentication (MFA), using no fewer than five biometric factors checked at pseudo-random intervals. A full-display fingerprint scanner, facial recognition, capacitive fingerprinting and blood-flow thermography are technologies that come to mind.
The entire QLED screen would authenticate each finger-press as we tap anywhere on the screen – something Apple patented in April 2019 – and I envision the future phone having thermogram sensors that capture heat information as you use the device. 3D face printouts or fingerprint hacks won’t work anymore, as the person using the phone must be a live human being.
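A sketch of the pseudo-random multi-factor idea: sample a subset of biometric checks on each unlock and require every sampled factor to clear a confidence threshold. The factor names and scores below are entirely hypothetical stand-ins for real sensor outputs:

```python
import random

# Hypothetical confidence scores (0.0-1.0) that on-device sensors might report.
FACTOR_CHECKS = {
    "in_screen_fingerprint": lambda: 0.97,
    "facial_recognition":    lambda: 0.93,
    "thermal_liveness":      lambda: 0.91,
}

def authenticate(num_factors=2, threshold=0.90, rng=random):
    """Pseudo-randomly choose a subset of factors; all must clear the
    threshold for the unlock to succeed."""
    chosen = rng.sample(sorted(FACTOR_CHECKS), k=num_factors)
    return all(FACTOR_CHECKS[name]() >= threshold for name in chosen)

print(authenticate(rng=random.Random(0)))  # True: every toy score clears 0.90
```

Because the sampled subset changes between unlocks, an attacker cannot know in advance which factor to spoof – the property the pseudo-random-interval scheme is after.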
Currently, the world’s smallest thermal camera is the Lepton from FLIR, but at $350 it is an expensive component to put into a phone. This is where a lower-cost component such as Panasonic’s thermographic matrix sensor, the Grid-EYE AMG8833, could be used.
The future phone will have multiple biometric layers – an in-screen fingerprint authenticator checking every time you type on the screen, plus a thermal-augmented facial recognition scan. This MFA approach gives confidence that only the owner can access their very expensive, high-tech piece of gear.
Imagine using your
phone to unlock your work monitor. There won’t be nosy co-workers trying to
guess your password or spoof your fingerprint reader. There’s nothing to break into
if the device isn’t even there.
Phone 2025 Vision
Looks like I’m going
to wait out the next few phone releases till Phone 2025 is released!
Figure 1. What would we see in the next generation of smartphones? Check out Part 2 to see what I think!
It has been an interesting year, with half a dozen flagship smartphones released within months of each other by major manufacturers. I thought the fall 2019 series of iPhones was an intriguing sign of things to come, and, historically speaking, Apple’s smartphones have been a benchmark many strive to match. The topic of smartphones can dominate a dinner discussion, with naysayers and pundits taking supportive or dismissive stances on the features of each new model.
Modern mobile phones have become
complex handheld computers that are expected to perform myriad workhorse and
entertainment functions. To meet the insatiable global consumer demand for the
latest smartphone, new flagship models are released in mere months and each new
smartphone is expected to dazzle consumers with new
differentiating and defining features.
The story of smartphones changing our lives now spans decades, and they have come a long way. In 2013, I praised the iPhone 5S, and in the last few months major manufacturers like Apple, Samsung and Huawei have been releasing new flagships vying for a chunk of the $355-billion pie.
What’s all the fuss about?
The mobile phone race has come a long way since the first iPhone disrupted the market in 2007 with its iconic keyboard-less capacitive touchscreen – a design that has largely remained unchanged and revolutionized subsequent smartphones.
Today the industry has become
extremely complex. Manufacturers are scrambling to differentiate themselves
with the smallest of features that could sway consumers to purchase their model
over a rival’s.
There is a long list of acquisitions of smaller technology companies by major manufacturers such as Apple and Samsung to create a significant differentiator in their handsets. A single component that defined a feature in a handset may have been the product of one of the larger smartphone manufacturers buying an entire company's portfolio – for instance, the purchase of AuthenTec for $356M in 2012 enabled Apple to lead the market the following year with fingerprint biometrics in the iPhone 5S, a major leap forward in phone security at the time.
Now, there is hardly a smartphone
without biometric security, immensely improving user experiences. With the
iPhone X, Apple once again rekindled the spotlight on the decades-old technology of facial
recognition –
which had been available since 2012 on phones such as the HTC One X. The difference
is that Apple has vastly improved the feature with a “dot-projector” that
allows the facial recognition camera to work in low-light conditions and
greatly boosts resistance to spoofing attempts that plagued the older generation
of phones with that feature.
Hits and Misses
Over the years, there have been some hits and misses. One example is manufacturers' attempts to integrate micro DLP projectors to expand screen real estate by projecting media and content onto an external surface – such as Samsung's Galaxy Beam and Lenovo's Smartcast – which received a lukewarm reception.
Other misses were the much-hyped "modular phone" approach, such as Google's Project Ara and Motorola's Moto Z (which is still available). The idea was simple – users could customize their phone as they liked. Need a bigger battery? Use a bigger battery module. Need more memory? Swap in a module with more, and as better components reached the market, users could upgrade older components with newer modules rather than replace their entire phone. The concept was desirable on paper, but Google's Ara project never entered mass production and I don't know anyone using a Moto Z phone.
Then we have the first foldable smartphone with a flexible screen, the Samsung Galaxy Fold. The ambitious, eye-watering $2,000 device got people excited with the idea of expanding your phone for larger screen real estate. However, it was possibly rushed to production, resulting in a massive media disaster when many reviewers' and users' devices failed just days into use.
Trends Today
In 2018, 1.56 billion smartphones were sold worldwide and this trend is fueled by the
ravenous demand of consumers clamoring for more features and capabilities from
their handsets.
Mobile displays have resolutions exceeding what the
human eye can discern and their touchscreens have sensitivities greater than
our skin. These components are often very difficult to produce, and competing companies are forced into partnerships for parts too sophisticated or too expensive to produce for a single smartphone model – for instance, Apple sources its memory chips and OLED screens from its rival Samsung for use in the iPhone X. It's a peculiar relationship,
where the bulk of Samsung’s revenue comes from selling its best parts to its
competitor. These are used in Apple’s flagship phone, which outsells Samsung’s
own flagship phone, but when the iPhone X succeeds, so does Samsung!
Major players now use one another's Intellectual Property (IP) – a teardown report of the iPhone X reveals that most of its components are manufactured by other semiconductor companies. This
complex labyrinth of manufacturing logistics has spawned a global behemoth of
Original Equipment Manufacturers (OEMs), where companies produce parts that are
then resold or repackaged by another manufacturer.
The lines blur here: manufacturers now have the option of selecting parts of similar specifications and capabilities produced for one brand and using them in their own, and hardware differentiation becomes more difficult when every new flagship smartphone has very similar specifications to its rival's. Fast processor? Check. High-definition screen? Check. Low-light zoom camera? Check. Waterproofing? Check.
This level of component interchangeability is unprecedented – YouTuber Scotty Allen built a working Android phone and an iPhone using back-alley components in Shenzhen, China.
Besides hardware features, the software and user experience of the operating system become a glaring differentiator. Manufacturers add their own flavors based on their corporate strengths, such as Google's "unlimited" storage, where its own cloud storage service (Google Drive) on the Pixel series delivers a seamless experience emulating a phone without a storage capacity limitation. Other manufacturers have introduced their own OS features, such as Apple's Siri and Samsung's Bixby personal assistants.
When it comes to battery life, there is a mismatch in technological advancement between chips and batteries: it is far harder to pack more energy into the same volume than it is to pack in more transistors. As processors get more capable and powerful, phone makers compensate for this incongruity by shrinking the electronics to leave more volume for batteries. As a result, phone designs have largely plateaued into the same design across the market: a flat piece of metal and glass.
Unfortunately, when it comes to
hardware – there are only so many transistors one can cram into a processor or
sensor element. Semiconductor companies are packing more features and functions into their
chips using
increasingly sophisticated and expensive manufacturing methods that only the
big boys can afford.
It's like watching a marathon where the best runners are neck and neck and no one can discern a clear winner, whilst the rest of the competition has fallen far behind or dropped out.
Vision then and now
January 9th, 2007 was the day the world changed. Apple co-founder Steve Jobs presented the iPhone, which transformed the smartphone from a clunky keypad device into a desirable, sleek capacitive touchscreen communicator. That seminal event rocked the mobile phone and computing industry and reinvented the meaning of a "mobile phone".
I remember the showcase vividly, enamored with how
technology leapt overnight. I realized we were in for a very different future
and sketched what I thought would be the phone of 2010.
Inspired by the first iPhone and the possibilities it
would bring and with existing advancements in 2007, I envisioned the following
features:
A liquid-zoom
lens that allowed actual telephoto-zoom capability into
the existing camera without physically moving a lens assembly.
16GB of memory (note that 2007-era phones had memory in the hundreds of megabytes).
A USB 3.0 port for high-speed data transfer and charging, and a large 4,000mAh lithium-polymer battery to power this beefy device.
Wow. The future was something to look forward to indeed.
What has happened since?
Then 2010 came, and the world got the Apple iPhone 4 and Google's Nexus One. Neither quite achieved my vision. The iPhone 4 had only 512MB of DRAM, and other flagships of the era had up to 1GB of on-board memory – a tad short of my envisioned 16GB.
However, Samsung worked around that limitation with an external microSD card slot, which in 2010 allowed users to add up to 32GB of aftermarket memory. Moreover, those devices also
introduced features such as a high-resolution “retina
display”, video-chat and a gyroscope sensor to complement the
accelerometer. The addition of a high-resolution screen, a more powerful
processor and more sensors enabled a new generation of mobile games that were
controlled by the physical pan and tilt actions of the user.
A mark of exciting times.
Today,
there is no shortage of projections of “the future smartphone” with jazzy ideas
such as a foldable or bendable phone and fully transparent screens constantly
being featured by concept artists.
Unfortunately, whilst imaginative, there is a major difference between an artistic concept and a manufacturable design – a fine balance that Apple has been very successful at striking. It is easy to envision
a bendable phone by introducing existing foldable batteries or flexible
electronics. However, unusual prototype or concept designs are notoriously
challenging to scale up to a mass production that meets consumer demand, or
very expensive to manufacture due to the low yield rate of a novel ingredient.
One missing component could require a complete redesign or an elimination of
that feature altogether.
An example would be a bendable phone, demonstrated in November 2018 by
Samsung and Royole. It’s arguable that whilst all components required to make a flexible
smartphone exist, there are a few problems – yield, availability and cost.
Flexible electronics do not yet have the component density of more established
rigid printed circuit boards (PCBs). There are fewer suppliers in the industry,
which means a higher cost and a lower yield to achieve the same performance of
rigid PCB counterparts. What about failure rates?
Samsung’s folding phone was a massive disaster, as no one wants a flexible
screen that fails after several hundred “folds” – a rigid screen is still more
reliable. Likewise, the same goes for a flexible battery, which does not have
the same energy density as the traditional lithium-ion battery pack, which has
an abundance of suppliers. These reasons are why I don’t think flexible phones
will become mainstream soon.
With these considerations and the current market inclinations, a future phone must be feasible, manufacturable and practical. Given the rapid, unpredictable advancement of technology, amalgamated with the complexities of global manufacturing logistics and market economics, I've decided to envision a phone six years into the future. I present my Phone 2025 concept in Part 2 of this segment!
“We are such stuff as dreams are made on, and our little life is rounded with a sleep.”
William Shakespeare, The Tempest
I present to you the Ontomorphic Quantum Processor – a beauty that came to me in a dream.
Sometime in the future
Imagine a scene out of the 2004 science-fiction action film I, Robot. Four men skilled in combat, myself included, were battling a humanoid robot in a tiny, claustrophobic room. We had trouble subduing it. The robot was nearly as quick as us, but it seemed invulnerable, with a tough composite alloy body. It fought in a windmill style, swinging its arms – metal arms that could cause serious damage to flesh and bone – in circles while rushing at us. There were no weaknesses we could exploit. It did not register pain, and attacking it was like striking a lamp post.
The robot was state-of-the-art – more advanced than anything I’ve encountered in my dreams. It could process multiple assailants via its visual feed and anticipate attack vectors before we made our moves, compensating for its slower artificial muscle actuators.
I postulated that its processor was unlike anything humans have constructed. It likely created a real-time 3D response map and simulated every possible scenario and angle of attack from aggressors, while learning and analyzing fight patterns – think Iron Man's AI analyzing Captain America's movements and countering them (in the Marvel Universe). Only our numerical superiority and teamwork finally brought down this robotic destroyer.
We removed the robot’s chest plate and, through a maze of wiring, found a cryogenic containment system. Why would a robot need a cryogenic system? One of my companions vented liquid helium from the vacutainer, nearly cold-burning his finger in the process. He released the inner pressure seal and that was when we witnessed this most advanced processor.
I consider myself relatively knowledgeable in the field of technology and we had a tech-wizard on the team who could lingo-speak with Tony Stark any day. The technology in front of us was generations ahead of anything we had faced and, given the paraphernalia required to run the processor, it became obvious what we had on our hands.
How a brain works
Our brains comprise neurons connected by dendritic synapses. A signal originates in a neuron as an action potential, an electrical impulse generated from chemical charge carriers known as ions. This electrochemical charge is then transferred via ions and neurotransmitters from one neuron to another. More details on the process are described here.
Computer processors work similarly. Almost every processor in use today is based on the manipulation of electrons – hence the term “electronics”. All information technology we have today follows the basic principle of sending electrons where we want them to go.
Batteries store electrons, transistors funnel and direct electrons, LEDs convert electrons to photons and, on a larger scale, integrated circuits are a bunch of transistors and switches turning on and off depending on how or where we want the electrons to go to.
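The "switches turning on and off" idea composes neatly: model a transistor as a switch and every logic gate falls out. A toy sketch in Python, using NAND as the universal building block (the function names are mine, purely illustrative):

```python
# Transistors as switches: NAND behaves like two switches in series,
# and every other logic gate can be composed from NAND alone.

def nand(a: int, b: int) -> int:
    return 0 if (a and b) else 1

def not_(a: int) -> int:
    return nand(a, a)

def and_(a: int, b: int) -> int:
    return not_(nand(a, b))

def or_(a: int, b: int) -> int:
    return nand(not_(a), not_(b))

# Truth table for OR, built purely from NAND "switches":
for a in (0, 1):
    for b in (0, 1):
        print(a, b, or_(a, b))
```

Scale that composition up by a few billion and you have an integrated circuit.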
Our brains process and store information via electrical signals, very similar to how computers do it. The only difference is that we use neurons, while computer processors use transistors. The problem with today's processors is that you can cram only so many transistors into a piece of silicon.
Eventually, traditional transistor-based processors encounter heat and electron-leakage problems.
The future needs a futuristic processor.
Robotic brains in science-fiction
Science-fiction abounds with the fantastic imagination of writers. The droids in I, Robot and the Star Wars film franchise are built with "positronic brains", while the killer robots in the Terminator film franchise use "neural net CPUs".
Since a positron or antielectron is the antiparticle or
the antimatter counterpart of the electron, I don’t think it’s implausible to
have a positronic processor as the manipulation of positrons could yield
superior processing power. Positrons are subatomic particles that have the same
mass as an electron, but a positive instead of a negative charge. When these
two particles encounter each other, they annihilate and produce two or three gamma-ray photons (high-energy light) in an event referred to as electron-positron annihilation.
I've yet to see scientific evidence of how positron manipulation is possible today; the most advanced research involving positrons is still focused on creating and studying them. So, a positronic brain remains in the realm of science-fiction.
Now, the neural net CPU in the Terminator franchise is described as being based on quantum-effect chips. Quantum computers
are no longer the stuff of science-fiction and are even commercially available
from Google and IBM.
Today's quantum computer systems are in their infancy and fraught with engineering challenges. They are roughly comparable to the first integrated circuit, released in 1958. Over the six decades since, the integrated circuit has evolved into a supercomputing device that fits in our palm – what we now know as a smartphone. Imagine going
back in time and showing someone from the 1960s the capabilities of your
smartphone. That piece of glass and metal in your hands would be considered
magic. The technology required to create a smartphone would have been
unfathomable then.
Ontomorphic quantum processor
Some 40 years into the future, the Ontomorphic Quantum Processor, which we dug out of the robot, is a self-learning quantum processor that does not need to be chilled to absolute zero to maintain quantum states. It is based on known science but is still beyond the capabilities of technology now.
The processor’s four interior walls contain four silicene/graphene-based quantum nano-electronic circuit boards. The circuits are then connected via what I call the electro-optic lattice. The lattice structure comprises optical and electrical conducting strings. The optical conductors transmit information photons of light via optical fibers made from yttrium aluminum garnet or sapphire crystals. The electrical wires are made from multi-wall carbon nanotubes that are superconducting in the cryogenic socket.
Four quantum circuits are enclosed by a large quantum memory crystal based on a diamond. This diamond-based quantum memory stores quantum information that is transmitted across the lattice via photons. If you compare it with today’s conventional processors, you could call this a quad-core CPU with shared memory, something already present in graphic processing units.
The lattice acts like a neural network, shoving light-signals where they need to go at near-light speed, and changes depending on the information being processed, which brings about its “learning capabilities” or the ontomorphic portion of this processor.
Wait… what?
Ontomorphic?
The word ontomorphic does not exist, but ontogeny refers to the inception and lifelong development of an organism physically and psychologically to its eventual maturity and subsequent senescence. I use this word because our human learning capabilities come from our interaction with our environment and experiences throughout our lives.
When one learns to perform a task, like ride a bicycle, do a cartwheel or master a new language, one is often awkward, clumsy and inefficient. But over time and practice, the brain forms new synaptic connections to streamline the knowledge.
One gets better, more efficient. That is the formation of new neural pathways in the brain, but the total number of neurons remains relatively the same.
This is the same for the Ontomorphic Quantum Processor. The number of quantum gates is limited by the initial design and construction of the processor, but the electro-optic lattice allows signals to be routed more efficiently over time. A simple example could be: "How can we arrive at 100 from 0?"
A child could start with 1 + 1 + 1 + 1 + 1… till he reaches 100. For this to happen, the child needs to understand the concept of numbers and the arithmetic function of addition. He would eventually arrive at the number 100. A triple-digit constant. Great!
Could this be done more effectively? A child introduced to the multiplication function could attempt a more efficient approach: 10 x 10 = 100. Even better. But now, the child needs to commit to memory what multiplication does and the tables associated with it.
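The two strategies can be compared directly by counting arithmetic steps – a rough sketch, assuming each addition or multiplication counts as one step:

```python
# Reaching 100 from 0: repeated addition vs a single multiplication.
# Each strategy returns (result, number_of_arithmetic_steps).

def by_addition(target: int) -> tuple:
    total, steps = 0, 0
    while total < target:
        total += 1   # the child adds 1...
        steps += 1   # ...one hundred times
    return total, steps

def by_multiplication(a: int, b: int) -> tuple:
    # One step, but it presumes the multiplication tables are memorized.
    return a * b, 1

print(by_addition(100))           # (100, 100)
print(by_multiplication(10, 10))  # (100, 1)
```

Same answer, two orders of magnitude fewer steps – at the cost of more stored knowledge.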
All this assumes that no errors occur in the process, which is almost impossible – which brings me to the morphic capabilities of the Ontomorphic Quantum Processor.
Learning and evolving from error
What is often referred to today as evolutionary or mutation computation is essentially a computer attempting a trial-and-error process to determine the most efficient and optimal solution. There will come a point where memorizing the entire multiplication tables will take too much memory to be viable or useful to the individual. What’s 5424 x 2413?
Yes, one could learn novel arithmetic to compute that mentally, but most adults will reach for a calculator. The process is comparable to determining that there’s a shortcut through an alley on your way home or discovering that a button on the photocopying machine scans a two-page document in half the time.
Evolutionary computation has been used to design more efficient antennas [12] and chairs, often exceeding what humans can envision manually. The ontomorphic capabilities of the Ontomorphic Quantum Processor come from the new junctions and spin-spin interactions of the electro-optic lattice – analogous to today's neural networks [13], but much more advanced and far faster, with near-lightspeed interaction.
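A toy version of that trial-and-error idea can be sketched as a mutate-and-keep-if-not-worse loop over a bit string. The target string and fitness function here are my own illustrative choices, nothing from real antenna-design research:

```python
import random

# Toy evolutionary computation: mutate one bit at a time and keep the
# change only if fitness does not decrease. Fitness then improves
# monotonically until the target is reached.

def fitness(candidate, target):
    # Number of positions that already match the target.
    return sum(c == t for c, t in zip(candidate, target))

def evolve(target: str, seed: int = 0) -> int:
    rng = random.Random(seed)  # seeded for reproducibility
    current = ["0"] * len(target)
    generations = 0
    while "".join(current) != target:
        candidate = current[:]
        i = rng.randrange(len(target))
        candidate[i] = rng.choice("01")  # random mutation
        if fitness(candidate, target) >= fitness(current, target):
            current = candidate          # keep non-worsening changes
        generations += 1
    return generations

print(evolve("1011010110") > 0)  # True: the target is reached eventually
```

No step "knows" the answer; the selection rule alone steers the system toward it.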
This processor learns and becomes better at what it’s instructed to do.
Quantum computer
The primary principle of the Ontomorphic Quantum Processor is quantum logic [1], using quantum-mechanical superposition and/or entanglement to perform computation. Quantum computers are vastly different from traditional computers in that they use quantum logic gates and qubits, and they have the potential to solve certain problems hundreds of millions of times faster than a traditional computer. From that perspective, today's most advanced transistor-based processors would look like an abacus beside a quantum computer system.
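A minimal flavour of superposition can be simulated in plain Python: a qubit is a pair of amplitudes, a Hadamard gate mixes them, and measurement probabilities are the squared magnitudes. This is just the textbook math, not a claim about how the fictional processor works:

```python
import math

# A single qubit as a pair of amplitudes (alpha, beta) with
# |alpha|^2 + |beta|^2 = 1. The Hadamard gate puts a basis state
# into an equal superposition of |0> and |1>.

def hadamard(state):
    a, b = state
    s = 1 / math.sqrt(2)
    return (s * (a + b), s * (a - b))

def probabilities(state):
    # Born rule: measurement probabilities are squared magnitudes.
    return tuple(abs(amp) ** 2 for amp in state)

qubit = (1.0, 0.0)           # starts in |0>
qubit = hadamard(qubit)      # equal superposition
print(probabilities(qubit))  # approximately (0.5, 0.5)
```

Applying the Hadamard gate a second time returns the qubit to |0> exactly – the interference that makes quantum logic more than coin-flipping.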
Silicene/graphene nanoelectronic board
Quantum logic gates are delicate structures and
traditional printed circuit boards aren’t going to cut it. So silicene/graphene-based
boards are required [2] as a foundation for the nano-electronic circuits that
contain billions of quantum gates [3]. Silicene is an allotrope of
silicon, much like graphene
is an allotrope of carbon. Both have hexagonal honeycomb structures and exhibit remarkable electrical conductivity and functionalization properties. Silicene would provide the base for traditional transistor construction with its band-gap tunability and stronger spin–orbit coupling, which is important for maintaining the quantum spin Hall effect. It's as small as it gets – atomic-level transistors.
Graphene will be utilized as the circuit foundation,
for it is better at conducting electricity than copper, which makes it ideal
for ultra-fast circuits [4]. Moreover, graphene has been shown to conduct electricity after absorbing light, via its photovoltaic effect [5].
These two incredible properties of graphene mean optoelectrical signals can be
transferred from quantum gate to quantum gate at ultra-fast near-lightspeeds
[6].
The silicene/graphene nanoelectronic board will contain all the quantum gates and convert the signals and information from light to electricity and vice versa.
Quantum memory diamonds
As you can imagine, you probably can't use traditional memory to store quantum information. Using diamonds as quantum memory is a recent development [7-11]. Normally, a diamond is composed of only carbon atoms in a tetrahedral structure. Substituting a nitrogen atom for a carbon atom at specific sites leaves a hole, or vacancy, in the crystal lattice. The nitrogen atom and the empty site can take on different quantum states and are used to store a quantum bit of information [11].
Diamond is an ideal material for quantum memory, as its crystalline structure achieves strong coupling between phonons and vacancy spins, which can be written to or read from as pulses of light – known as phonon-mediated quantum photonics [7, 8]. The probable reason the diamond is arranged this way is to allow shared memory and faster memory access from one nanoelectronic circuit board to another. Plus, diamond is an excellent heat conductor because of its strong covalent bonding and low phonon scattering. The thermal conductivity of natural diamond is measured at about 2,200 W/(m·K), five times more than silver, the most thermally conductive metal. This allows the whole processor to be effectively cooled to just above absolute zero to reduce quantum errors.
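The "five times silver" comparison checks out with back-of-envelope arithmetic (silver's ~429 W/(m·K) is a standard textbook value):

```python
# Thermal conductivities in W/(m*K); the silver figure is a textbook value.
diamond_k = 2200
silver_k = 429

print(round(diamond_k / silver_k, 1))  # 5.1 - about five times silver
```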
Year 2060
Computers have come a long way since 1960 and will continue to go further. Historically, there is an incredible breakthrough every century with major advancements in technology, from the bronze and iron ages to modern industrial, atomic and space ages.
Today, we are at the forefront of the information age, which shows no sign of stagnating, and systems are already incredibly impressive. Isaac Asimov's robot science-fiction novels have captured the imaginations of many, and several of his stories have become fact in recent years.
2060 will mark one century of computing progress and possibly the quantum age of mankind. Hopefully, I will live long enough to see it.
The performance of the Ontomorphic Quantum Processor will be unlike anything we can imagine today. It would perform far more accurately, and incomparably faster, than any human can. That's gonna rattle some cages.
Imagine seeing one firsthand. Now that will be exciting.
I've had my trusty 2TB Western Digital Passport for a while now, plus a couple of thumb drives of varying capacities lying around. As file sizes get bigger, the question is no longer "how much capacity?" but "how fast can I read/write my stuff?"
Transferring a 40GB ISO file took forever, and I thought it was high time to upgrade. One of the biggest improvements in computing in the last decade was the growth of flash storage (storing data on chips instead of magnetic discs). Think about how much boot-up and loading time SSDs have saved you. Speed aside, SSDs also have a size advantage: today, it is possible to cram as much as 2TB of storage onto an M.2 drive the size and weight of a stick of chewing gum.
Since all my PCs are on SSDs now, it's time to move away from hard drives. Thumb drives of up to 256GB exist now, and Hardwarezone reviewed a couple of external SSD-based storage gadgets here. If you need more storage, there's always Kingston's new DataTraveler Ultimate Generation GT in 1 and 2TB capacities. Kingston's previous largest flash drive, the 1TB DataTraveler HyperX Predator, is currently selling for over US$1,400 on Amazon as of July 2017. Yes – over a thousand dollars for a thumb drive. Oh well, bragging rights are never cheap.
I didn't really need to carry 2TB around all the time, and one grand is too much to stomach for a flash drive, so I went and assembled my own SSD-based thumb drive from an M.2 SATA SSD. I got the M.2-to-USB converter enclosure here. Other sellers carry this item too; however, it lacks a model number, so you must search for it with generic terms such as "NGFF USB 3.0". You can pick one up for around US$10. My SSD is a standard desktop-grade Adata SP900 M.2 2280 SATA in 512GB, based on synchronous MLC NAND flash and an LSI SF-2281 controller, which I got for about $300.
Assembly
Pitting them head-on, both plugged into the USB 3.0 port of my PC.
Performance benchmarking
As expected, on an OS with UASP support, in this case Windows 10, we can connect in UASP mode. I was getting 427 – 486MB/s read and topping out at 230 – 260MB/s write speeds on the SSD across two benchmarking utilities, both more than ten times faster than the hard drive.
Real-world file transfer
A simple unoptimized transfer with individual file sizes exceeding 4GB saw an average speed of 192MB/s, moving 26.6GB of data in 180 seconds, or about 3 minutes – reasonable given a bus write speed of 260MB/s. The same transfer would have taken 1,016 seconds, or about 17 minutes, on the hard drive!
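The arithmetic behind that comparison is straightforward. The hard drive's ~26.8MB/s effective speed below is inferred from the 1,016-second figure; a perfectly sustained 192MB/s would finish in about 142 seconds, with per-file overhead accounting for the rest of the measured time:

```python
# Transfer-time arithmetic: the same 26.6GB payload at two sustained speeds.
def transfer_seconds(size_gb: float, speed_mb_s: float) -> float:
    return size_gb * 1024 / speed_mb_s  # convert GB to MB, divide by MB/s

ssd_time = transfer_seconds(26.6, 192.0)  # measured SSD average
hdd_time = transfer_seconds(26.6, 26.8)   # inferred hard-drive speed

print(round(ssd_time))  # ~142 seconds
print(round(hdd_time))  # ~1016 seconds, about 17 minutes
```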
Power consumption
Power consumption is always a concern when you're mobile. In this case, my SSD consumes about 30% less power when active than my hard drive and idles at 139mA on average when there is no activity – significant when you're on the road running off battery power.
The SSD consumes between 0.14A and 0.36A when idle and active (read/write).
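To put those currents in perspective, here is some rough drain arithmetic. The 10,000mAh power-bank capacity is a hypothetical figure of mine, and this ignores voltage-conversion losses:

```python
# Rough battery-drain arithmetic, ignoring conversion losses.
def hours_to_drain(bank_mah: float, draw_a: float) -> float:
    return (bank_mah / 1000) / draw_a  # mAh -> Ah, divided by amps drawn

print(round(hours_to_drain(10000, 0.139), 1))  # idle: ~71.9 hours
print(round(hours_to_drain(10000, 0.36), 1))   # active: ~27.8 hours
```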
Teardown
For geeks: the controller is based on the ASM1153E USB 3.0 to SATA III controller chipset from ASMedia. The SOIC-8 PH25Q40B chip beside it is a 4Mbit SPI flash memory, likely used to store the device ID reported to the host as well as specific programming addresses. Findchips and Octopart yielded no results, but judging by the pin-out and footprint of the chip, it could be a clone of a similar 4Mbit (512K x 8) SPI NOR flash memory by Winbond with the part number "W25Q40B".
Verdict
All in all, I'm pretty satisfied with the results, and hopefully it'll last me for the next half a decade, as my previous storage devices have reliably done!