there’s a bluebird in my heart that wants to get out but I’m too tough for him, I say, stay in there, I’m not going to let anybody see you.
there’s a bluebird in my heart that wants to get out but I pour whiskey on him and inhale cigarette smoke and the whores and the bartenders and the grocery clerks never know that he’s in there.
there’s a bluebird in my heart that wants to get out but I’m too tough for him, I say, stay down, do you want to mess me up? you want to screw up the works? you want to blow my book sales in Europe?
there’s a bluebird in my heart that wants to get out but I’m too clever, I only let him out at night sometimes when everybody’s asleep. I say, I know that you’re there, so don’t be sad. then I put him back, but he’s singing a little in there, I haven’t quite let him die and we sleep together like that with our secret pact and it’s nice enough to make a man weep, but I don’t weep, do you?
If [economic thinking] cannot get beyond its vast abstractions, the national income, the rate of growth, capital/output ratio, input-output analysis, labour mobility, capital accumulation; if it cannot get beyond all this and make contact with the human realities of poverty, frustration, alienation, despair, breakdown, crime, escapism, stress, congestion, ugliness, and spiritual death, then let us scrap economics and start afresh.
Are there not indeed enough “signs of the times” to indicate that a new start is needed?
“Minimalism is not a style, it is an attitude, a way of being. It’s a fundamental reaction to noise, visual noise, disorder, vulgarity. Minimalism is the pursuit of the essence of things, not the appearance. Minimalism is beyond time. It is timelessness. It is the stillness of perfection.”—Massimo Vignelli (via eelhound)
“Where the mood of the moment is solitary and quiet it is called sabi. When the artist is feeling depressed or sad, and this particular feeling of emptiness catches a glimpse of something rather ordinary and unpretentious in its incredible “suchness,” the mood is called wabi. When the moment evokes a much more intense, nostalgic sadness connected with autumn and the vanishing away of the world, it is called aware. And when the vision is hinting at an unknown never to be discovered, the mood is called yugen.”—From Alan W. Watts, The Way of Zen, p. 176. (via thenightlymirror)
I'm interested in knowing more about this peculiar Japanese concept of beauty, both classic and contemporary. Are there any other unique Japanese words describing arts & aesthetics that you can share? Thanks.
9 Elements of Japanese Aesthetics
1. “Imperfection”: Wabi-sabi (侘寂) is the beauty of things “imperfect, impermanent, and incomplete”.
Wabi is the quality of a rustic, yet refined, solitary beauty. Sabi is the beauty that stems from age: the patina of time, and the idea that changes due to use may make an object more beautiful and valuable.
As things come and go, they show signs of their coming or going and these signs are considered to be beautiful.
Sakura 桜 (cherry blossoms) in spring and koyo 紅葉 (autumn colors) in fall represent wabi-sabi: they are aesthetically pleasing because they don’t last.
2. “Elegance”: Miyabi (雅) is about elegance, refinement, or courtliness. Sometimes translated as “heart-breaker,” miyabi demanded the elimination of anything absurd or vulgar.
Kinkaku-ji 金閣寺(Temple of the Golden Pavilion) in Kyoto, Japan.
3. “Subtle”: Shibui (渋い) or shibusa (渋さ) is a simple, subtle, and unobtrusive beauty. It means that things are more beautiful when they speak for themselves.
A Bizen sake carafe. The beauty of it doesn’t need announcement; its quality speaks for itself. It involves the maturity, complexity, history, and patina that only time can bring.
4. “Originality”: Iki (粋) is a refined uniqueness: an expression of simplicity, sophistication, spontaneity, and originality that is audacious and unselfconscious while still remaining measured and controlled.
Kimonos were simple and minimal, often striped and coloured to deep shades of blues and greys on the surface. However, the insides were lined with opulent silk, designed so that only the sophisticated could recognise their secret luxury.
A geisha (芸者) also embodies iki: they are beautiful and sophisticated, but they have no intention of standing out. They combine sassiness with innocence, sexiness with restraint.
5. “Slow, accelerate, end”: Jo-ha-kyū (序破急) describes a tempo that begins slowly, accelerates, and then ends swiftly.
The idea of jo-ha-kyū is used in traditional Japanese arts such as the tea ceremony and martial arts.
6. “Mysterious”: Yūgen (幽玄) triggers feelings too deep and mysterious for words. It shows that real beauty exists when, through its suggestiveness, only a few words, or few brush strokes, can suggest what has not been said or shown, and hence awaken many inner thoughts and feelings.
The Dragon of Smoke Escaping from Mt Fuji (Katsushika Hokusai 葛飾 北斎)
7. “Discipline”: Geidō (芸道) refers to the discipline and ethics of the traditional Japanese arts. Ethics and discipline make things more attractive.
Japanese martial arts aren’t about the result: defeating your enemy. They’re about the path that gets you there. They see no value in a shortcut, even when the end result is the same.
Japanese tea ceremony: A cup of tea is trivial compared with the process of making, serving and consuming the tea. The process is the art.
8. “The Void”: Ensō (円相) means “circle”. It is a form of minimalist art common in Japanese design and aesthetics. It symbolizes absolute enlightenment, strength, elegance, the Universe, and the void.
In Zen Buddhist painting, ensō symbolizes a moment when the mind is free to simply let the body/spirit create.
At first glance, an ensō may appear to be just a circle, but its symbolism represents the spiritual growth of the artist: the brushwork, which includes dragging, pressing, and sweeping techniques, reveals the depth of enlightenment he or she has reached up to that point. “It is said to be a picture of the mind,” explains award-winning calligrapher Shoho Teramoto, “because the circle projects one’s mind directly.”
9. “Cute”: Kawaii (かわいい) is the quality of cuteness in the context of Japanese culture. It has become a prominent aspect of Japanese popular culture, entertainment, clothing, food, toys, personal appearance, behavior, and mannerisms.
It happens quickly—more quickly than you, being human, can fully process.
A front tire blows, and your autonomous SUV swerves. But rather than veering left, into the opposing lane of traffic, the robotic vehicle steers right. Brakes engage, the system tries to correct itself, but there’s too much momentum. Like a cornball stunt in a bad action movie, you are over the cliff, in free fall.
Your robot, the one you paid good money for, has chosen to kill you. Better that, its collision-response algorithms decided, than a high-speed, head-on collision with a smaller, non-robotic compact. There were two people in that car, to your one. The math couldn’t be simpler.
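The “moral math” described here — minimize total expected deaths, with no special loyalty to the car’s owner — can be caricatured in a few lines. This is a hypothetical sketch for illustration only, not anyone’s actual collision-response algorithm; the maneuver names and death counts are invented to match the scenario above.

```python
# Hypothetical "greatest good" collision chooser: pick the maneuver
# that minimizes expected deaths, regardless of who dies.

def choose_maneuver(options):
    """options: list of (name, expected_deaths) tuples."""
    return min(options, key=lambda opt: opt[1])

# The scenario above: swerving off the cliff kills one person (the
# owner); a head-on with the compact kills its two occupants.
options = [
    ("swerve_off_cliff", 1),
    ("head_on_compact", 2),
]

print(choose_maneuver(options)[0])  # → swerve_off_cliff
```

Under this rule the owner always loses any one-versus-many comparison, which is exactly the dilemma the rest of the piece explores.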
This, roughly speaking, is the problem presented by Patrick Lin, an associate philosophy professor and director of the Ethics + Emerging Sciences Group at California Polytechnic State University. In a recent opinion piece for Wired, Lin explored one of the most disturbing questions in robot ethics: If a crash is unavoidable, should an autonomous car choose who it slams into?
It might seem like a simple thought experiment, a twist on the classic “trolley problem,” an ethical conundrum that asks whether you’d save five people on a runaway trolley, at the price of killing one person on the tracks. But the more detailed the crash scenarios get, the harder they are to navigate. Assume that the robot has what can only be described as superhuman senses and reaction speed, thanks to its machine reflexes and suite of advanced sensors. In that moment of truth before the collision, should the vehicle target a small car, rather than a big one, to err towards protecting its master? Or should it do the reverse, aiming for the SUV, even if it means reducing the robo-car owner’s chances of survival? And what if it’s a choice between driving into a school bus, or plowing into a tree? Does the robot choose a massacre, or a betrayal?
The key factor, again, is the car’s superhuman status. “With great power comes great responsibility,” says Lin. “If these machines have greater capacity than we do, higher processor speeds, better sensors, that seems to imply a greater responsibility to make better decisions.”
Current autonomous cars, it should be said, are more student driver than Spider-Man, unable to notice a human motorist waving them through an intersection, much less churn through a complex matrix of projected impacts, death tolls, and what Lin calls “moral math” in the moments before a collision. But sensors, processors and software are the rare elements of robotics that tend to advance rapidly (while actuation and power density, for example, limp along with typical analog stubbornness). While the timeframe is unclear, autonomous cars are guaranteed to eventually do what people can’t, either as individual sensor-laden devices, or because they’re communicating with other vehicles and connected infrastructure, and anticipating events as only a hive mind can.
So if we assume that hyper-competence is the manifest destiny of machines, then we’re forced to ask a question that’s bigger than who they should crash into. If robots are going to be superhuman, isn’t it their duty to be superheroes, and use those powers to save as many humans as possible?
* * *
This second hypothetical is bloodier than the first, but less lethal.
A group of soldiers has wandered into the kill box. That’s the GPS-designated area within which an autonomous military ground robot has been given clearance to engage any and all targets. The machine’s sensors calculate wind speed, humidity, and barometric pressure. Then it goes to work.
The shots land cleanly, for the most part. All of the targets are down.
But only one of them is in immediate mortal danger—instead of suffering a leg wound, like the rest, he took a round to the abdomen. Even a robot’s aim isn’t perfect.
The machine pulls back, and holds its fire while the targets are evacuated.
No one would call this kind of robot a life-saver. But in a presentation to DARPA and the National Academy of Sciences two years ago, Lin presented the opposite what-if scenario: a killer robot that’s accurate enough to shoot essentially every one of its targets.
According to Lin, such a system would risk violating the Geneva Conventions’ article restricting “arms which cause superfluous injury or unnecessary suffering.” The International Committee of the Red Cross developed more specific guidelines in a later proposal, calling for a ban on weapons with a “field mortality of more than 25% or hospital mortality of more than 5%.” In other words, new systems shouldn’t kill a target outright more than a quarter of the time, or have more than a five percent chance of leading to his or her death in a hospital.
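The two ICRC percentages quoted here are simple thresholds, so a weapon’s measured statistics can be checked against them directly. A minimal sketch, assuming only the two figures from the proposal; the function name and the example mortality rates are invented for illustration.

```python
# Check a weapon system's measured statistics against the ICRC
# proposal's limits: field mortality no more than 25%, hospital
# mortality no more than 5%.

def within_icrc_limits(field_mortality, hospital_mortality):
    return field_mortality <= 0.25 and hospital_mortality <= 0.05

# A hypothetical "perfect aim" robot that kills outright almost every
# time fails the field-mortality test decisively.
print(within_icrc_limits(field_mortality=0.95, hospital_mortality=0.02))  # → False
print(within_icrc_limits(field_mortality=0.20, hospital_mortality=0.04))  # → True
```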
“It’s implicit in war, that we want to give everyone a fair chance,” says Lin. “The other side probably aren’t all volunteers. They could be conscripted. So the laws of war don’t authorize you to kill, but to render enemy combatants unable to fight.” A robot that specializes in shooting people in the head, or some other incredibly effective, but overwhelmingly lethal capability—where death is a certainty, because of superhuman prowess—could certainly be defined as inhumane.
As with the autonomous car crash scenario, everything hinges on that level of technological certainty. A human soldier or police officer isn’t legally or ethically expected to aim for a target’s leg. Accuracy, at any range or skill level, is never a sure thing for mere mortals, much less ones full of adrenaline. Likewise, even the most seasoned, professional driver can’t be expected to execute the perfect maneuver, or the ethically “correct” decision, in the split-second preceding a sudden highway collision.
But if it’s possible to build that level of precision into a machine, expectations would invariably change. The makers of robots that do bodily harm (through intention or accident) would have to address a range of trolley problems during development, and provide clear decisions for each one. Armed bot designers might have it relatively easy, if they’re able to program systems to cripple targets instead of executing them. But if that’s the clear choice—that robots should actively reduce human deaths, even among the enemy—wouldn’t you have to accept that your car has killed you, instead of two strangers?
* * *
Follow this line of reasoning to its logical conclusion, and things start to get a little sci-fi, and more than a little unsettling. If robots are proven capable of sparing human lives, sacrificing the few for the good of the many, what sort of monster would program them to do otherwise?
And yet, nobody in their right mind would buy an autonomous car that explicitly warns the customer that his or her safety is not its first priority.
That’s the dilemma that makers of robot vehicles could eventually face if they take the moral and ethical high road, and design them to limit human injury or death without discrimination. To say that such an admission would slow the adoption of autonomous cars is an understatement. “Buy our car,” jokes Michael Cahill, a law professor and vice dean at Brooklyn Law School, “but be aware that it might drive over a cliff rather than hit a car with two people.”
Okay, so that was Cahill’s tossed-out hypothetical, not mine. But as difficult as it would be to convince automakers to throw their own customers under the proverbial bus, or to force their hand with regulations, it might be the only option that shields them from widespread litigation. Because whatever they choose to do—kill the couple, or the driver, or randomly pick a target—these are ethical decisions being made ahead of time. As such, they could be far more vulnerable to lawsuits, says Cahill, as victims and their family members dissect and indict decisions that weren’t made in the spur of the moment, “but far in advance, in the comfort of corporate offices.”
In the absence of a universal standard for built-in, pre-collision ethics, superhuman cars could start to resemble supervillains, aiming for the elderly driver rather than the younger investment banker—the latter’s family could potentially sue for considerably more lost wages. Or, less ghoulishly, the vehicle’s designers could pick targets based solely on the make and model of car. “Don’t steer towards the Lexus,” says Cahill. “If you have to hit something, you could program it to hit a cheaper car, since the driver is more likely to have less money.”
The greater good scenario is looking better and better. In fact, I’d argue that from a legal, moral, and ethical standpoint, it’s the only viable option. It’s terrifying to think that your robot chauffeur might not have your back, and that it would, without a moment’s hesitation, choose to launch you off that cliff. Or weirder still, concoct a plan among its fellow, networked bots, swerving your car into the path of a speeding truck, to deflect it away from a school bus. But if the robots develop that degree of power over life and death, shouldn’t they have to wield it responsibly?
“That’s one way to look at it, that the beauty of robots is that they don’t have relationships to anybody. They can make decisions that are better for everyone,” says Cahill. “But if you lived in that world, where robots made all the decisions, you might think it’s a dystopia.”
“We thought of life by analogy with a journey, a pilgrimage, which had a serious purpose at the end, and the thing was to get to that end, success or whatever it is, maybe heaven after you’re dead. But we missed the point the whole way along. It was a musical thing and you were supposed to sing or to dance while the music was being played.”—Alan Watts (via elmirastorm)
“Let’s suppose that you were able, every night, to dream any dream you wanted to dream. And that you could, for example, have the power within one night to dream 75 years of time, or any length of time you wanted to have. And you would, naturally, as you began on this adventure of dreams, you would fulfil all your wishes. You would have every kind of pleasure you could perceive. And after several nights of 75 years of total pleasure each you would say “Well, that was pretty great. But now let’s have a surprise. Let’s have a dream which isn’t under control. Where something is going to happen to me that I don’t know what it’s going to be.” And you would dream that and come out and say “Wow, that was a close shave, wasn’t it?” And then you would get more and more adventurous, and you would make further and further gambles as to what you would dream, and finally, you would dream where you are now. You would dream the dream of living the life that you are actually living today.”—Alan Watts (via ispeakphotographyx)
“Our children are disconnecting with nature. By the time they are seven years old, most youngsters have been exposed to more than 20,000 advertisements. They can identify 200 corporate logos, but they cannot identify the trees growing in their front yards.”—Celeste Mary Barker (via compassum)