
AFTERWORD - Listen Carefully

 

When I finished the book in late 2021, I'd included a chapter on the ethics of AI and the implications of 'learning to speak whale'. But the book was already very long, and my team felt it was a big enough challenge to get readers to take seriously the idea that we might decode animal communications, let alone to think about how we should manage this capability!
But then things moved fast, and Roger became ill, and with the paperback coming out I felt it was a good chance to write an Afterword about him and about some of the ideas on ethics and animal AI. This was published in spring 2023 and I've posted it here too, so everyone who bought the earlier edition of the book can engage with it (and anyone else too, though some of the references to people, whales and projects in the book might be a bit mystifying). - TMRM

LISTEN CAREFULLY

A lot can change in a few months, and a lot has changed since this book was released. In the AI world, in the world of animal translation, in my world. Some of it is wonderful, some challenging, and some pains me to share.

 

I’m back in Roger Payne’s house in Vermont. The snow has melted in the woods, the grey trees suddenly sprouting green tips. A valley away, Roger’s son John has been at the swamp edges, recording the overwhelming choruses of frogs, which the locals call “peepers.” John is also a biologist, brought up on the shores of Patagonia. As we played the audio of the frogs to Roger, John told us how they had been so loud that, as he got close, he could feel his ears constricting to protect themselves. Roger laughed his deep, fulsome laugh. “Oh wonderful.”

 

Downstairs now, Roger is sleeping. There is cancer in him. He was given three to six months to live, and it has been three months. When I last visited in the winter, we walked around the lake together. Now he is in bed and he will never leave it. True to form, he has decided that no one is to be sad. What is the point, he says, of moping? It just takes up time that he and Lisa would rather spend in other ways. So meals are taken together around his bed (now situated just off the kitchen), where Roger raises and lowers himself with a remote to greet friends and family arriving from across the globe. He breaks the news to each that his life is ending.

 

Of course, amidst the laughter there are also tears. I had not realised how many people he had touched, how many others had Roger as an extra father figure in their lives. Lisa told me that every time David Gruber, Roger’s friend and colleague from Project CETI, visited, he would leave Roger’s room beaming, but once out of the room he would grieve his fading with her, and they would weep. I was to spend six days with Roger. He had been at the centre of so much of my work and so much of the world’s understanding of whales. I wanted to discuss the latest news and give him the last word.

 

And there are some important developments to take stock of.

CLOSER TO WHALE SPEAK

Before the long journey up to Roger’s house I had visited David Gruber in Boston. We sat on the harbour-side by the Aquarium and talked about jellyfish, his first love. He got out his laptop to show me how Project CETI, the massive endeavour to decode sperm whale communication, was going. David was cautious, and at first seemed almost embarrassed. There had been a setback—though the array had weathered tropical storms, the movement of the unexpectedly strong currents in the uncharted depths off Dominica had caused the thick cabling of the listening station to rub against its base. After a month it had sheared in two. David was so pained as he told me this, it was almost as if a person had been lost at sea. It made me think of the Gemini space program, and how many rockets had malfunctioned or exploded before a mission had been completed. Making complex things work in new places rarely goes right straight out of the gate. The loss of the first array was unfortunate and costly, but the others had been modified to withstand these forces.

 

On everything else, David was enthusiastic. He showed me videos of drones dropping multidirectional hydrophone arrays built into small, soft tags onto the backs of whales. The soft suckers they’d developed, modelled on those of suckerfish, have proven surprisingly resilient—staying on the whales’ bodies for many times longer than any previous tags, even under extreme pressures while the whales dove. He played me a drone video of a pod of whales interacting near the surface, synched up to their different voices recorded underwater. Now I could watch as whales vocalised, and see others change course and swim over to them, then communicate in turn. A ‘God-Mode’ view of whale conversations, like observing characters interact in The Sims. I realised I’d never before seen a video of any animal interactions underwater that I could also listen to. Our marine movies have always been silent.

 

On the whole, CETI’s whale listening was working. In a single month, the array had recorded more than double the amount of sperm “whalespeak” ever recorded before. With this to play with, the team used AI to separate out the voices of different whales, even when clustered closely together, which allowed them to track the dynamics of their vocalisations. They found that the whales appear to take turns to talk (as we do) rather than chorusing; they listen to what each other is saying before responding. This means that the whales might be having “conversations”—exchanging meaningful information. Because the tags stay put even while the massive animals dive over a kilometre, they’ve been able to observe the way whales fall silent while hunting, then chatter together as they return to the surface.

 

Furthermore, CETI also believe they may have decoded their first sperm whale “word”—the vocal signal that the whales use to initiate a dive. Perhaps most compelling of all, a team of their AI scientists led by Pratyusha Sharma believe they may have even been able to lay out the entire sperm whale phonetic alphabet. Their early analysis shows that the way we currently look at sperm whale communications, as being made up of combinations of thirty-odd types of ‘codas’, is far too crude. They think that within the codas they’ve found smaller and far more varied units. In human natural language this is a key feature. We combine small, meaningless units (phonemes) into near-infinite larger meaningful units (morphemes/words), allowing us enormous descriptive breadth and flexibility. The analysis from CETI so far could place sperm whales linguistically closer to humans than any animal before.

 

As well as this, through the listening, CETI is noticing the importance of other, non-click sounds that the whales make (grunts and other noises). Early tests are finding complex patterns in these, which would not be expected if the whales’ communications were simple and non-language-like. By the time you read this, the full listening array should be in place, and after three to five years this should have provided upwards of 4 billion recordings—a dataset large enough for them to properly deploy the most powerful ML language pattern tools to begin work. When we met, this was all going through peer review. Perhaps, David warned me, they were wrong—there could be a fault in the analysis; their hunches could be off. But man, was it exciting.

 

We’d gone in so deep that the sun had almost set and we had to sprint downtown to catch my bus onwards to Vermont. I looked out wild-eyed on the landscape of New England, my thoughts in distant depths and clicking conversations. One morning later that week, after Lisa had made Roger breakfast and tended to him, we sat at his bedside as David phoned from Vancouver. “One of our risks was that the whales could be incredibly boring,” David said, “but at least we are de-risking that!” Roger guffawed. Later he admitted that initially he had been sceptical of the decision to go for sperm whales rather than humpbacks, who he thought might have more to say. But he’d changed his mind. “My impression is the whales are really doing something fancy,” he said. I asked how he felt to be missing out on these wild times. Landlocked in Vermont, no longer to return to the sea and its voices. “If I don’t live to see it, it will be totally frustrating,” he said. “When you see something starting in the way that this has started, you would be crazy not to be excited by the possibilities.”

 

AI IS A THING NOW - AND SOME IS CREEPY

When I sat down to write this book, one of the main challenges I faced was that people didn’t know what AI was, and I didn’t think they’d believe that it could help us do hard things, like translate what whales are saying. No longer. AI systems are part of our lives now. In August 2020, during the pandemic, the UK government was unable to gather students for exams. Instead, they decided to predict their grades using an algorithm. The process was a catastrophe: more than a third of students were downgraded from the results their teachers had predicted, and with this their life chances were reduced. Thousands protested, taking to the streets against this inscrutable computational interference in their lives. “F*CK THE ALGORITHM” read one banner. The government caved in and reversed the decision.

 

But perhaps more disruptive than AI that performs badly is AI that works too well. In my own industry, wildlife film, humans are being replaced by machines everywhere. It took me years to learn how to expose shots and develop the muscle memory to follow fast-moving subjects like bouncing kangaroos and swooping birds on a long lens, all while keeping them in focus. Now my camera recognises human and animal faces and crisply tracks them for me. It does it so well that I have in some circumstances (grudgingly, then casually) begun to let it take over. While I filmed with Roger, AI tools stabilised the wobbles and bled just enough neutral density filter across the aperture to expose the image perfectly. These were once specialist human jobs. Now they come built into the camera package.

 

Recently AI tools were able to fool the judges in a nature photography competition, and won a landscape portrait prize with an artificial scene, regurgitated from digesting millions of real ones. It fooled me. AI language tools can pass the bar exam and write poetry. We’ve trained machines to beat us on all our old war games (Chess and Go), and then our new ones (Warcraft, Starcraft), and now even our fighter pilots have been bested by AI on military simulators. We’ve used AI systems in predicting new pandemic pathogens and in designing nuclear reactors. We have connected some models to as much human knowledge as we can gather, including discussions of how machines might overthrow and destroy us. What could go wrong?

 

While Roger slept, I read Twitter. A fierce argument was raging about Artificial General Intelligence (AGI): future computer systems that might perform all intellectual tasks better than humans. These, it was suggested, could simply improve themselves beyond our control, and then, intentionally or not, kill us all. Some argued that there was no oversight and the corporations developing these godlike powers were racing each other to our doom. Others said that this was sci-fi; that AGI was centuries away and we could always pull the plug or tell it what to do. Heads of state were getting involved.

 

I have tried to get my head around what AGI really means from a biological perspective, and it is unnerving. If you think of these machines as new “brains,” we have created them free from many of the limitations our biological brains face. They are not trapped in hard skulls, so their computational bodies can grow enormous and be added to at whim. They don’t sleep or get distracted by sex or insecurity or ego, so they can learn and practice tasks all the time. Like our brains, they require a lot of energy, but instead of glucose they feed off electricity—they do not need plants or animals to make and fuel themselves; they don’t even need the same atmosphere or a perfectly stable temperature. They “live” not in biological bodies but within the protective structures of corporate entities. These are more effective defences than venomous barbs or thick plates of biological armour: corporations have protective coatings of lawyers, and they operate in habitats where the law has been slow to catch up and impose curbs on their growth. Evolutionarily, when species with new adaptations can exploit totally new resources and environments, in the absence of competition, parasites or predators, and with a high growth rate, they spread rapidly and upset the balance of the ecology they have invaded. As a close friend who works at the frontier of AI research sent me in a text: “We are making these systems more powerful faster than we are making them safe, so lots of risk we go extinct in the process.” And these non-biological “brains” have another edge. They don’t die.

 

ME ROBOT, YOU WHALE

So what risks can we perceive in the domain of machine learning and cetaceans? Newly able to communicate with other animals, how do we prevent human actors from exploiting them for terrible ends? And of course, there is the law of unintended consequences: without thinking of how disruptive it is for others, we have accidentally polluted the seas with noise and plastic and filled the night skies with light. Could we similarly pollute animal cultures by talking to them? We have a terrible record of “first contacts” among our own species. Aza Raskin, from Earth Species Project, recently addressed the World Economic Forum and claimed it would soon be possible to deepfake whale-speak. In fact, he believed this may already have happened. ESP research partners used humpback whale contact calls (thought to potentially mean something like “hello” and also perhaps encode the whale’s “name”) and trained a language model to create new ones to play to the whales. It should be noted that we have been doing this sort of thing with other species for decades with “playback” experiments. From monkeys to birds to elephants to dolphins, scientists have been playing them back both their own and modified sounds to see what they do, to try and understand what meaning they might have.

 

That we’ve been doing something for a while doesn’t mean we should continue. There is an argument that these experiments could help the whales, for instance if we could figure out a message to play that would warn them of oncoming vessels. There is a world in which the whales are indifferent to the synthetic voices they are played. But to me this is an unnerving power. The difference today with AI is that the sounds we play to animals will likely be more realistic. Aza described how, just as he could now build a chatbot in Chinese that would convince a Chinese speaker without himself speaking any Chinese, he thought, “We will likely … [from the whales’ perspective] be able to pass the whale Turing test”—and convince a whale that they are listening to another whale, or themselves, speaking. If you were swimming through the sea and a new, uncanny voice-name started saying hello to you from boats, would it terrify you, intrigue you, drive you mad? This is far from the world of speaking to animals imagined in Dr Dolittle. As Aza put it, “there is a sort of first contact moment that is about to happen but not in the way I think that we originally expected …”

 

The cultures in the sea have been around for a long time, possibly far longer than human cultures. They are already under great pressure. We sit at a crossroads. Just as we have discovered the existence of whale cultures and their fragility, we have a startling power. CRISPR is a gene-editing technology; one of ESP’s partners commented, “If we are not careful, we may have just invented a CRISPR of culture.” And rather than decoding whale-speak first and then deciding what to communicate, we can now communicate before we understand. Stepping back, perhaps playing semi-random AI-generated sounds at vulnerable animals with complex vocal cultures could be a bad idea? Should we stop all of this experimentation?

 

Since the book came out, I have realised how reductive most of our conversations about nature and technology have become, and how strongly I do not want to feed into this. I don’t think tech is inherently good or bad for nature. I think it is very powerful, and capable of harm and help. It cannot go unacknowledged that many of the cruellest situations we’ve imposed on other animals are facilitated by bespoke machinery. Pigs in intensive agriculture systems—often kept in concrete skyscrapers—can now be surveilled by AI systems twenty-four hours a day, making sure they grow optimally. SCOTT Automation has a lamb-processing system that X-rays and laser-reconstructs each carcass before cutting and deboning an entire lamb at a rate of twelve carcasses a minute. One lamb through the machine every five seconds. Bats have been used by the US military as living incendiary devices, dolphins trained to kill divers and lay mines.

 

Information is power, and information about animal communication systems can lead to great powers. Just as in the broader conversations on AI, these fast-developing tools can also promise help and hope. Roger Payne’s success is inseparable from the hydrophones, spectrograms and flexi-discs that made it possible. Discoveries we make with our ocean tools can motivate colossal change in the realms of conservation. The intentions behind technology matter, and unlike corporate-motivated CEOs, I believe scientists and conservationists are more likely, and better institutionally supported, to pull back if they find their efforts are causing harm. The philosopher and technologist Jonathan Ledgard goes further. He believes it is vital that we steer AI to be invested in nature. “AI amplifies anthropocentrism,” he writes. “If it shows no curiosity for non-humans in this early stage of its evolution, it is less likely to be a steward of their interests or even to record their disappearance … Since wild animals, trees, birds, and other beings lack money and voice, there is every chance AI will be incurious of them at precisely the moment it should be paying attention.”

 

Of course, it would be a disaster to damage these cultures in the act of illuminating them, just as the breath of awed explorers has corroded ancient cave paintings and the oils of our fingertips dissolve ancient manuscripts as we turn their pages. But to seal these animal cultures away from science looks unlikely to help save them, because their worlds are full of us already—only a tiny proportion of the disturbance in their lives comes from scientists.

 

So how do we navigate our way through what some are calling the Interspecies Age, as the revolution in tools gives us new opportunities for both harm and care? Do we walk away from this? Cetaceans already face existential threats and in some regards the genie is out of the bottle. One argument, and to me a powerful one, is that nature is taking such a hammering, we cannot let only the bad guys use the powerful new machines. In this view, eschewing certain AI tools to help serve conservation just because the tools were made by, for example, Facebook, could be like forswearing all boats because some pirates sail them. Software follows the orders it has been given, and there is a choice over how to wield these powers. Disengagement leaves the way clear for those without your scruples. Understanding what animals are saying, speaking to a whale, these sound perhaps silly, like children’s stories. But they are not silly. There are cultures in other species. We could learn from them and now soon perhaps communicate with them. They are fragile, unique, invaluable and not ours. The possibilities resulting from AI meeting nature are wildly different depending on how we choose to use these tools. I wrote this book so people would take this idea seriously, because I think it is deadly serious.

 

WHAT IF ANIMAL TRANSLATION WORKS

I believe now is the time for serious discussions about protecting nonhuman cultures and, more broadly, the digital rights of nature in the age of AI. Perhaps recent history can guide us. We have navigated ethical minefields laid by emerging life-science technologies before.

 

In 1982, when faced with developments in embryology such as IVF and other more controversial manipulations of human embryos outside of the womb, the UK Government ran a two-year inquiry chaired by the philosopher Lady Warnock, drawing on doctors, social workers, psychiatrists, neurologists, ministers, and public representatives, to devise guidelines for how the country would approach the ethical conundrums these new technological powers created, such as the paternity of IVF offspring, how long embryos could be frozen and until what age they could be used for research, and the legality of surrogacy agencies. These guidelines became law.

 

In 1996, the Human Genome Project agreed (before work began) that they would make all genetic information discovered freely available to anyone, rather than patenting it and selling it as their competitor threatened to. Not only this, but they would release their data within twenty-four hours of each discovery, rather than waiting until the sequencing was complete. Like Warnock’s inquiry, I suggest we seek broad input, convening philosophers, scientists, policy makers, and perhaps human representatives to stand for the other species, as human lawyers are appointed to represent clients who cannot speak, such as the very young. Like the HGP, we could oblige research to be open and results to be available to all and not for sale. We could adopt international codes of practice, and adapt them according to what we discover. I think this is urgent: we must do this before we discover the worst harms by blundering into them, and before we see how bad actors might act by giving them free rein.

 

Many of the AI tools we have today were created by private companies. But as the philosopher James Bridle writes, “intelligence is a poor thing when it is imagined by corporations.” I am very persuaded by James’s argument that we should not let our visions of how these tools are used be determined and restricted only by the entities that have created them. If we seek to speak to the whales, should we first agree on how we approach this, with a plan to minimise risk and consensus on what we might seek to discuss? Should there be a moratorium on private, for-profit communication studies? Or would this stifle routes to understanding other species? Who represents them in our culture? Should the UN host representatives of the species we could speak to? How can we seek consent for contact? Where will all the data go? Should the natural history museums of the world host repositories of digital life on earth as well as bones and skins? Can any corporation, university, or individual entity own the voices of whales and other animals? How could we explain to whales that their IP has been ring-fenced on the land?

 

I asked David Gruber what he felt about this, as CETI took its first steps in this new terrain. “It is important to consider who is doing the research and why,” he told me. “For CETI, it is about repeatedly asking if this effort is in service of the whales; how it may deepen our connection and better our stewardship of life in the ocean.” Could his study be a template for how we approach Interspecies work in general? Perhaps as we ponder speaking to other animals, we can be guided by an old human custom: listen first, talk later.

 

 

HELLO AGAIN, OLD FRIEND

A couple of months before I visited Roger, I had a message from Ted Cheeseman of Happywhale. They’d made a new version of the whale identification software, one that worked instantly. Jorge Urban and his team were testing the prototype in the Pacific humpback whale breeding grounds off Mexico. One day they came across three humpback whales. They used the new app. It told them that of all the thousands of whales in the ocean, one of the three in front of them was my old friend: CRC12564. Prime Suspect. Whales moult their skin very often, and when Prime Suspect sloughed a little off near their boat, they managed to scoop up enough to run a DNA analysis. The results arrived just before we went to press, and I can confirm that Prime Suspect is … a male!

 

They also had a GPS tag with them, No. PTT849, which they gently attached to him. From London, through the bleak, grey winter, I followed the tropical sea wanderings of the giant wild animal that seven years ago had jumped on top of me. Prime Suspect headed out of the Bahia de Banderas and travelled south along the outer coast of Jalisco state, then turned and swam north along Nayarit, passing Yelapa and San Blas where, according to Dr Daniel Palacios, he spent the rest of the time “milling around” before the tag stopped working.

 

Currently Happywhale only works to identify individual humpback whales using photos of their tail flukes; now Ted and the Happywhale team are working on the more complicated task of identifying whales from photos of any body part and across many different species. To make this compelling for humans to engage with, he also plans to use generative text AI tools like ChatGPT to turn all the data into a written life story of each whale you encounter. My dream is to link Happywhale photo IDs from whales seen on the surface to the audio being recorded of their vocalisations by hydrophones on the seabed. Then you could see a whale, take a photo, immediately learn who they are, and then listen to them communicating and singing. You could learn their name in humpback-speak. Because each whale has a unique voice, you could then find your whale’s voice in the decades of audio recordings we’ve made across whole oceans, and go back to listen to them through time. If the animal translation tools of CETI work, you could then find out what they had been saying.

In just a few years I have learned so much about this one whale that leapt onto Charlotte and me. Imagine what these tools will mean as they are used across the world for other animals, like garden birds. When you look out your window and see a starling or on a late walk hear a nightingale sing, to know not just what they are but who they are. To learn where they’ve been, to compare their migration time and song to those of others. To see them not as species, as kinds of living things, but as individuals with personalities. To see them as persons.

 

At the end of this long journey, I wonder if I should have changed the title of my book. Instead of How to Speak Whale, perhaps it should be How to Listen Whale. I think of what Joy Reidenberg said to me, when Prime Suspect’s humpback life first collided with mine: “You can’t just ask a whale, why did you do that thing?” Well now I can find him, and perhaps soon I could ask him. But as I’ve got to know more about him and his kind, I find myself less interested in what he did to me and why. What a human thing to want to ask about a story where I am the protagonist! I suspect he gave me very little thought at all. Instead, I just want to know him better.

 

If I met this whale again and I could ask him something, with language no barrier, I think I’d want to know what is most important to him. To ask him to show me the wonders of a sea for a humpback whale, to help me understand what he cares about. Because the more we listen and observe, the stranger our impression of what cetaceans care about becomes.

 

Right now the orcas off Gibraltar have been smashing up the rudders of sailboats. This has spread in their pods and across the ocean; some have succeeded in sinking a ship and disabling dozens; now they are attacking boats off the north of Scotland. Scientists think they are teaching each other how to do this, and they have no explanation for why. Could it be revenge? Do they see the boats as threats? Luke Rendell suggests it is a fad in their culture. This isn’t new. In 1987 a pod of orcas in the Pacific Northwest started wearing salmon as hats, one matriarch carrying around a fish on her head for days. The craze caught on and soon two other pods were following suit. After some weeks they abruptly stopped. Why? At the time of writing, other killer whales are gathering in unprecedented groups of many different matrilines and clans off California, and we can’t explain it.

 

In 1987 Russian sailors on an icebreaker freed two thousand trapped belugas by playing them classical music in order to guide them to safety. Why did this work? A century ago, Pelorus Jack, a Risso’s dolphin, spent twenty-four years guiding vessels across the dangerous Cook Strait. The dolphin would take each in turn, for twenty minutes. Waiting sailors would hold their vessels back for Jack to return before leading them safely through the deadly waters. What made this animal do this? Right now, somewhere, dolphins are surfing. Sperm whales in the Azores have “adopted” a bottlenose dolphin with a spinal deformity. What drives these actions? Killer whales in Iceland have adopted, or abducted, a baby pilot whale. Others off San Juan Island, mammal hunters, have been seen swimming with a deer. In Tangalooma, Australia, dolphins are placing “gifts” for humans on the shore. As I type, fin whale voices are penetrating the earth’s crust, and mother Gray whales are whispering to their young, and bowhead whales are being born to sing their ancient songs. What is carried in their voices? Could we hope to understand these diverse, strange, and vanishing cultures?

 

These questions are fascinating to me, and it is so strange how little money and attention we have given to them. The budget for the Large Hadron Collider at CERN in 2022 was €1.2 billion; the James Webb Space Telescope cost about $10 billion. Research into animal communication has never had funds of these orders of magnitude. With all due respect to theoretical subatomic particles and distant supernovae, they exist all over the universe, and none are going extinct right now.

 

And for Roger, this is about much more than whales. It is about saving life—ours, and the rest of it. While I visited, he’d just finished a final article for Time magazine. In this he looked back at the history of science and concluded that we had already discovered the most consequential insight of all, but that it had not registered with us yet. “It is this,” he wrote:

 

“Every species, including humans, depends on a suite of other species to keep the world habitable for it.” We have discovered a few of these species, have noticed enough to give them Latin names, but have little deep knowledge of how they live and work and interact. The great hurdle we faced in our survival was not technological but emotional. It was “to figure out how to motivate ourselves and our fellow humans to make species preservation our highest calling.” Otherwise, our inability to grasp this fact would kill us all, “graveyard dead”. It was time for us “to once again listen to the whales—and, this time, to do it with every bit of empathy and ingenuity we can muster.”

 

Looking at Roger’s story today, it can seem a done deal that we saved the whales by listening to them fifty years ago. But remember this: it seemed insane when Roger set out. They were doomed; no one cared. Who would have bet on him? He told me he had no idea if it would work, but he had to try. And trying made him feel better. I asked him about the future. This was a bleak moment.

 

He felt, he said, that his grandchildren would live through “an ever-decreasing world. And that breaks my heart, because I won’t be there to help.” Despite this, he drew some hope from an unexpected source: human nature. That when people realise something new, when we feel connections to others, when we change our minds, they “can change so fast that you just can’t keep up with it.” And few have witnessed this more than he.

Before I left him, I asked for some advice. “How do you listen?” I asked Roger.

 

“Attentively,” he said. “You should listen silently. You should listen with nothing else to distract you. With a completely open mind. That’s the way to listen.” Later, as my coach drove me through Vermont and away from this frail and failing force of nature, over rivers swollen with suddenly melted water, I gathered my thoughts to reflect on what he had told me.

 

This is what I wrote:

 

Do not give up on human nature. Work with it to forge emotional connections to the more-than-human world. Go to the sea, be curious, try your damnedest. Rejoice along the way.

 

This is what he did, this is what we can do.

 

Goodbye, my friend.

 

 

In memory and tribute to Roger Searle Payne, 1935–2023
