Arms manufacturer Raytheon fined almost $1 billion in the United States

By Alimat Aliyeva
Raytheon, a subsidiary of the largest contractor of the US
Department of Defense, Raytheon Technologies Corporation (RTX),
will pay more than $950 million in fines in a fraud and
corruption case, Azernews reports.
The US authorities accused Raytheon of overcharging on
government contracts, double-billing the US Department of
Defense, and illegally exporting weapons abroad.
The company is said to have violated the Foreign Corrupt
Practices Act (FCPA), the Arms Export Control Act (AECA), and the
International Traffic in Arms Regulations (ITAR). In particular,
the company took part in bribing an official in Qatar, conspired
to export weapons abroad, and provided false information to the
US Department of Defense during two transactions. The first
concerned the supply of Patriot air defense systems; the second,
support for a radar station.
The case involves numerous contracts concluded between 2009
and 2020. At a court hearing in New York, RTX representatives
pleaded not guilty to two of the three charges.


QualityGrader: expanding global reach and technological capabilities

Flikweert Vision celebrates an impressive milestone with the production of the 200th QualityGrader, just three years after the first machine was launched on the market. What began as an innovation for the optical sorting of unwashed potatoes has evolved into a versatile solution that can now also sort onions and washed potatoes. The QualityGrader has made its way to companies worldwide, from the United States to New Zealand, and the demand for this advanced technology continues to grow.

The development of the QualityGrader
In 2021, Flikweert Vision launched the first QualityGrader for sorting unwashed potatoes, revolutionizing the Dutch seed potato sector. Two years later, the first QualityGrader for onions was sold. This also proved to be a great success, as a large portion of Dutch onion processors now rely on the QualityGrader. Since last year, the QualityGrader has crossed borders, finding its way to countries such as Germany, France, and the United Kingdom, as well as further afield in the USA, Canada, and Australia.

This success can be attributed to the continuous development of the software. These improvements are also available for machines already installed, meaning the first machines run on the same software as QualityGrader No. 200. However, the advancements are not limited to software alone. Three months ago, Flikweert Vision introduced two hardware upgrade options for existing and new QualityGraders. The number of cameras in the machine was doubled from three to six, allowing each product to be inspected from two different camera angles. This made the QualityGrader even more precise, particularly in detecting small defects. In addition to the extra cameras, the QualityGrader can now be equipped with increased computing power for higher capacity. As a result, the QualityGrader can now sort up to 20 tons per hour, reaffirming its position as a leading solution in the international market.

4 QualityGraders for Albert Elligsen GmbH
The 200th QualityGrader is destined for Albert Elligsen GmbH, a German packaging company that decided to invest in four QualityGraders this summer. The family-owned business, founded in 1931 and now run by the third generation, Dirk Elligsen, specializes in packaging washed table potatoes and onions. Elligsen remains a leader in sustainable innovations. “I was looking for a precise machine that could reduce labor hours. Additionally, the machine needed to switch between potatoes and onions with just the push of a button. After visiting several optical sorting machines in action, I have full confidence in the performance of the QualityGraders,” said Dirk Elligsen. The machines will be installed in early November in collaboration with DT Dijkstra.

Collaboration with DT Dijkstra
“It’s great to collaborate with the builder of the current sorting line, DT Dijkstra,” says Thijs van der Torren, Flikweert Vision’s representative in Germany. DT Dijkstra created a design for Elligsen that provides great flexibility from the potato storage bunkers to the packaging lines, and optimally integrates the 4 QualityGraders.

For more information:
Lisanne Bogaard
Flikweert Vision
Tel: +316 15 69 45 17
Mail: [email protected]
www.flikweertvision.nl/

The Coming Tech Autocracy

Reviewed:

AI Needs You: How We Can Change AI’s Future and Save Our Own
by Verity Harding
Princeton University Press, 274 pp., $24.95

Taming Silicon Valley: How We Can Ensure That AI Works for Us
by Gary Marcus
MIT Press, 235 pp., $18.95 (paper)

The Mind’s Mirror: Risk and Reward in the Age of AI
by Daniela Rus and Gregory Mone
Norton, 280 pp., $29.99

Code Dependent: Living in the Shadow of AI
by Madhumita Murgia
Henry Holt, 311 pp., $29.99

To understand why a number of Silicon Valley tech moguls are supporting Donald Trump’s third presidential campaign after shunning him in 2016 and 2020, look no further than chapter three, bullet point five, of this year’s Republican platform. It starts with the party’s position on cryptocurrency, that ephemeral digital creation that facilitates money laundering, cybercrime, and illicit gun sales while greatly taxing energy and water resources:
Republicans will end Democrats’ unlawful and unAmerican Crypto crackdown and oppose the creation of a Central Bank Digital Currency. We will defend the right to mine Bitcoin, and ensure every American has the right to self-custody of their Digital Assets, and transact free from Government Surveillance and Control.
The platform then pivots to artificial intelligence, the technology that brings us deepfake videos, voice cloning, and a special kind of misinformation that goes by the euphemistic term “hallucination,” as if the AI happened to accidentally swallow a tab of LSD:
We will repeal Joe Biden’s dangerous Executive Order that hinders AI Innovation, and imposes Radical Leftwing ideas on the development of this technology. In its place, Republicans support AI development rooted in Free Speech and Human Flourishing.
According to the venture capitalist Ben Horowitz, who, along with his business partner Marc Andreessen, is all in for the former president,1 Trump wrote those words himself. (This may explain the random capitalizations.) As we were reminded a few months ago when Trump requested a billion dollars from the oil and gas industry in exchange for favorable energy policies once he is in office again, much of American politics is transactional. Horowitz and Andreessen are, by their own account, the biggest crypto investors in the world, and their VC firm, Andreessen Horowitz, holds two AI funds worth billions. It makes sense, then, that they and others who are investing in or building these nascent industries would support a craven, felonious autocrat; the return on investment promises to be substantial. Or, as Andreessen wrote last year in a five-thousand-plus-word ramble through his neoreactionary animating philosophy called “The Techno-Optimist Manifesto,” “Willing buyer meets willing seller, a price is struck, both sides benefit from the exchange or it doesn’t happen.”2

The fact is, more than 70 percent of Americans, both Democrat and Republican, favor the establishment of standards to test and ensure the safety of artificial intelligence, according to a survey conducted by the analytics consultancy Ipsos last November. An earlier Ipsos poll found that 83 percent “do not trust the companies developing AI systems to do so responsibly,” a view that was also held across the political spectrum. Even so, as shown by both the Republican platform and a reported draft executive order on AI prepared by Trump advisers that would require an immediate review of “unnecessary and burdensome regulations,” public concern is no match for corporate dollars.

Perhaps to justify this disregard, Trump and his advisers are keen to blame China. “Look, AI is very scary, but we absolutely have to win, because if we don’t win, then China wins, and that is a very bad world,” Trump told Horowitz and Andreessen. (They agreed.) Pitching AI as “a new geopolitical battlefield that must somehow be ‘won,’” to quote Verity Harding, the former head of public policy at Google DeepMind, has become a convenient pretext for its unfettered development. Harding’s new book, AI Needs You, an eminently readable examination of the debates over earlier transformational technologies and their resolutions, suggests—perhaps a bit too hopefully—that it doesn’t have to be this way.

Artificial intelligence, a broad category of computer programs that automate tasks that might otherwise require human cognition, is not new. It has been used for years to recommend films on Netflix, filter spam e-mails, scan medical images for cancer, play chess, and translate languages, among many other things, with relatively little public or political interest. That changed in November 2022, when OpenAI, a company that started as a nonprofit committed to developing AI for the common good, released ChatGPT, an application powered by the company’s large language model. It and subsequent generative AI platforms, with their seemingly magical abilities to compose poetry, pass the bar exam, create pictures from words, and write code, captured the public imagination and ushered in a technological revolution.

It quickly became clear, though, that the magic of generative AI could also be used to practice the dark arts: with the right prompt, it could explain how to make a bomb, launch a phishing attack, or impersonate a president.
And it can be wildly yet confidently inaccurate, pumping out invented facts that seem plausibly like the real thing, as well as perpetuating stereotypes and reinforcing social and political biases. Generative AI is trained on enormous amounts of data—the early models were essentially fed the entire Internet—including copyrighted material that was appropriated without consent. That is bad enough and has led to a number of lawsuits, but, worse, once material is incorporated into a foundational model, the model can be prompted to write or draw “in the style of” someone, diluting the original creator’s value in the marketplace. When, in May 2023, the Writers Guild of America went on strike, in part to restrict the use of AI-generated scripts, and was joined by the Screen Actors Guild two months later, it was a blatant warning to the rest of us that generative AI was going to change all manner of work, including creative work that might have seemed immune from automation because it is so fundamentally human and idiosyncratic.

It also became apparent that generative AI is going to be extremely lucrative, not only for the billionaires of Silicon Valley, whose wealth has already more than doubled since Trump’s 2017 tax cuts, but for the overall economy, potentially surpassing the economic impact of the Internet itself. By one account, AI will add close to $16 trillion to the global economy by 2030. OpenAI, having shed its early idealism, is, by the latest accounting, valued at $157 billion. Anthropic, a rival company founded by OpenAI alumni, is in talks to increase its valuation to $40 billion. (Amazon is an investor.) Meta, Google, and Microsoft, too, have their own AI chatbots, and Apple recently integrated AI into its newest phones. As the cognitive scientist Gary Marcus proclaims in his short but mighty broadside Taming Silicon Valley: How We Can Ensure That AI Works for Us, after ChatGPT was released, “almost overnight AI went from a research project to potential cash cow.”

Arguably, artificial intelligence’s most immediate economic effect, and the most obvious reason it is projected to add trillions to the global economy, is that it will reduce or replace human labor. While it will take time for AI agents to be cheaper than human workers (because the cost of training AI is currently so high), a recent survey of chief financial officers conducted by researchers at Duke University and the Federal Reserve found that more than 60 percent of US companies plan to use AI to automate tasks currently done by people. In a study of 750 business leaders, 37 percent said AI technology had replaced some of their workers in 2023, and 44 percent reported that they expected to lay off employees this year due to AI. But in the MIT computer scientist Daniela Rus’s new book, The Mind’s Mirror: Risk and Reward in the Age of AI, written with Gregory Mone, she offers a largely sunny take on the digital future:
The long-term impact of automation on job loss is extremely difficult to predict, but we do know that AI does not automate jobs. AI and machine learning automate tasks—and not every task, either.
This is a semantic feint: tasks are what jobs are made of. Goldman Sachs estimates that 300 million jobs globally will be lost or degraded by artificial intelligence.

What does degraded mean? Contrary to Rus, who believes that technologies such as ChatGPT “will not eliminate writing as an occupation, yet they will undoubtedly alter many writing jobs,” consider the case of Olivia Lipkin, a twenty-five-year-old copywriter at a San Francisco tech start-up. As she told The Washington Post, her assignments dropped off after the release of ChatGPT, and managers began referring to her as “Olivia/ChatGPT.” Eventually her job was eliminated because, as noted in her company’s internal Slack messages, using the bot was cheaper than paying a writer. “I was actually out of a job because of AI,” she said.

Lipkin is one person, but she represents a trend that has only just begun to gather steam. The outplacement firm Challenger, Gray and Christmas found that nearly four thousand US jobs were lost to AI in May 2023 alone. In many cases, workers are now training the technology that will replace them, either inadvertently, by modeling a given task—i.e., writing ad copy that the machine eventually mimics—or explicitly, by teaching the AI to see patterns, recognize objects, or flag the words, concepts, and images that the tech companies have determined to be off-limits.

In Code Dependent: Living in the Shadow of AI, the journalist Madhumita Murgia documents numerous cases of people, primarily in the Global South, whose “work couches a badly kept secret about so-called artificial intelligence systems—that the technology does not ‘learn’ independently, and it needs humans, millions of them, to power it.” They include displaced Syrian doctors who are training AI to recognize prostate cancer, college graduates in Venezuela labeling fashion items for e-commerce sites, and young people in Kenya who spend hours each day poring over photographs, identifying the many objects that an autonomous car might encounter. Eventually the AI itself will be able to find the patterns in the prostate cancer scans and spot the difference between a stop sign and a yield sign, and the humans will be left behind.

And then there is the other kind of degradation, the kind that subjects workers to horrific content in order to train artificial intelligence to recognize and reject it. At a facility in Kenya, Murgia found workers subcontracted by Meta who spend their days watching “bodies dismembered from drone attacks, child pornography, bestiality, necrophilia and suicides, filtering them out so that we don’t have to.” “I later discovered that many of them had nightmares for months and years,” she writes: “Some were on antidepressants, others had drifted away from their families, unable to bear being near their own children any longer.” The same kind of work was being done elsewhere for OpenAI. In some of these cases, workers are required to sign agreements that absolve the tech companies of responsibility for any mental health issues that arise in the course of their employment and forbid them from talking to anyone, including family members, about the work they do.

It may be some consolation that tech companies are trying to keep the most toxic material out of their AI systems. But they have not prevented bad actors from using generative AI to inject venomous content into the public square.
Deepfake technology, which can replace a person in an existing image with someone else’s likeness or clone a person’s voice, is already being used to create political propaganda. Recently the Trump patron Elon Musk posted on X, the social media site he owns, a manipulated video of Kamala Harris saying things she never said, without any indication that it was fake. Similarly, in the aftermath of Hurricane Helene, a doctored photo of Trump knee-deep in the floodwaters went viral. (The picture first appeared on Threads and was flagged by Meta as fake.) While deepfake technology can also be used for legitimate reasons, such as to create a cute Pepsi ad that Rus writes about, it has been used primarily to make nonconsensual pornography: of all the deepfakes found online in 2023, 98 percent were porn, and 99 percent of those depicted were women.

For the most part, those who do not give permission for their likenesses to be used in AI-generated porn have no legal recourse in US courts. Though thirteen states currently have laws penalizing the creation or dissemination of sexually explicit deepfakes, there are no federal laws prohibiting the creation or consumption of nonconsensual pornography (unless it involves children). Section 230 of the Communications Decency Act, which has shielded social media companies from liability for what is published on their platforms, may also provide cover for AI companies whose technology is used to create this material.3 The European Union’s AI Act, which was passed in the spring, has the most nuanced rules to curb malicious AI-generated content. But, as Murgia points out, trying to get nonconsensual images and videos removed from the Internet is nearly impossible.

The EU AI Act is the most comprehensive legislation to address some of the more egregious harms of artificial intelligence. The European Commission first began exploring the possibility of regulating AI in the spring of 2021, and it took three years, scores of amendments, public comments, and vetting by numerous committees to get it passed. The act was almost derailed by lobbyists working on behalf of OpenAI, Microsoft, Google, and other tech companies, who spent more than 100 million euros in a single year trying to persuade the EU to make the regulations voluntary rather than mandatory. When that didn’t work, Sam Altman, the CEO of OpenAI, who has claimed numerous times that he would like governments to regulate AI, threatened to pull the company’s operations from Europe because he found the draft law too onerous. He did not follow through, but Altman’s threat was a billboard-size announcement of the power that the tech companies now wield. As the political scientist Ian Bremmer warned in a 2023 TED Talk, the next global superpower may well be those who run the big tech companies:
These technology titans are not just men worth 50 or 100 billion dollars or more. They are increasingly the most powerful people on the planet, with influence over our futures. And we need to know: Are they going to act accountably as they release new and powerful artificial intelligence?
It’s a crucial question.

So far, tech companies have been resisting government-imposed guidelines and regulations, arguing instead for extrajudicial, voluntary rules. To support this position, they have trotted out the age-old canard that regulation stifles innovation and relied on conservative pundits like James Pethokoukis, a senior fellow at the American Enterprise Institute, for backup. The real “danger around AI is that overeager Washington policymakers will rush to regulate a fast-evolving technology,” Pethokoukis wrote in an editorial in the New York Post.
We shouldn’t risk slowing a technology with vast potential to make America richer, healthier, more militarily secure, and more capable of dealing with problems such as climate change and future pandemics.
The tech companies are hedging their bets by engaging in a multipronged effort of regulatory capture. According to Politico,
an organization backed by Silicon Valley billionaires and tied to leading artificial intelligence firms is funding the salaries of more than a dozen AI fellows in key congressional offices, across federal agencies and at influential think tanks.
If they succeed, the fox will not only be guarding the henhouse—the fox will have convinced legislators that this will increase the hens’ productivity.

Another common antiregulation stance masquerading as its opposite is the assertion—like the one made by Michael Schwarz, Microsoft’s chief economist, at last year’s World Economic Forum Growth Summit—that “we shouldn’t regulate AI until we see some meaningful harm that is actually happening.” (A more bizarre variant of this was articulated by Marc Andreessen on an episode of the podcast The Ben and Marc Show, when he said that he and Horowitz are not against regulation but believe it “should happen at the application level, not at the technology level…because to regulate AI at the technology level, then you’re regulating math.”) Those harms are already evident, of course, from AI-generated deepfakes to algorithmic bias to the proliferation of misinformation and cybercrime.

Murgia writes about an AI algorithm used by police in the Netherlands that identifies children who may, in the future, commit a crime; another whose seemingly neutral dataset led to more health care for whites than Blacks because it used how much a person paid for health care as a proxy for their health care needs; and an AI-guided drone system deployed by the United States in Yemen that determined which people to kill based on certain predetermined patterns of behavior, not on their confirmed identities. Predictive systems, whose parameters are concealed by proprietary algorithms, are being used in an increasing number of industries, as well as by law enforcement and government agencies and throughout the criminal justice system. Typically, when machines decide to deny parole, reject an application for government benefits, or toss out the résumé of a job seeker, the rebuffed party has few, if any, remedies: How can they appeal to a machine that will always give them the same answer?

There are also very real, immediate environmental harms from AI. Large language models have colossal carbon footprints. By one estimate, the carbon emissions resulting from the training of GPT-3 were the equivalent of those from a car driving the 435,000 or so miles to the moon and back, while for GPT-4 the footprint was three hundred times that. Rus cites a 2023 projection that if Google were to swap out its current search engine for a large language model, the company’s “total electricity consumption would skyrocket, rivaling the energy appetite of a country like Ireland.” Rus also points out that the amount of water needed to cool the computers used to train these models, as well as their data centers, is enormous. According to one study, it takes between 700,000 and two million liters of fresh water just to train a large language model, let alone deploy it. Another study estimates that a large data center requires between one million and five million gallons of water a day, or what’s used by a city of 10,000 to 50,000 people.

Microsoft, which has already integrated its AI chatbot, Copilot, into many of its business and productivity products, is looking to small modular nuclear reactors to power its AI ambitions. It’s a long shot. No Western nation has begun building any of these small reactors, and in the US only one company has had its design approved, at a cost of $500 million. To come full circle, Microsoft is training an LLM on documents relating to the licensing of nuclear power plants, in an effort to expedite the regulatory process.
Not surprisingly, there is already opposition in communities where these new nuclear plants may be located. In the meantime, Microsoft has signed a deal with the operators of the Three Mile Island nuclear plant to bring the part of the facility that did not melt down in 1979 back online by 2028. Microsoft will purchase all of the energy created there for twenty years.

No doubt Gary Marcus would applaud the EU AI Act and other attempts to hold the big tech companies to account, since he wrote his book as a call to action. “We can’t realistically expect that those who hope to get rich from AI are going to have the interests of the rest of us close at heart,” he writes. “We can’t count on governments driven by campaign finance contributions to push back. The only chance at all is for the rest of us to speak up, really loudly.”

Marcus details the demands that citizens should make of their governments and the tech companies. They include transparency on how AI systems work; compensation for individuals if their data is used to train LLMs and the right to consent to this use; and the ability to hold tech companies liable for the harms they cause by eliminating Section 230, imposing cash penalties, and passing stricter product liability laws, among other things. Marcus also suggests—as does Rus—that a new, AI-specific federal agency, akin to the FDA, the FCC, or the FTC, might provide the most robust oversight. As he told the Senate when he testified in May 2023:
The number of risks is large. The amount of information to keep up on is so much…. AI is going to be such a large part of our future and is so complicated and moving so fast…[we should consider having] an agency whose full-time job is to do this.
It’s a fine idea, and one that a Republican president who is committed to decimating the so-called administrative state would surely never implement. And after the Supreme Court’s recent decision overturning the Chevron doctrine, Democratic presidents who try to create a new federal agency—at least one with teeth—will likely find the effort hamstrung by conservative jurists. That doctrine, established by the Court’s 1984 decision in Chevron v. Natural Resources Defense Council, granted federal agencies the power to use their expertise to interpret congressional legislation. As a consequence, it gave the agencies and their nonpartisan civil servants considerable leeway in applying laws and making policy decisions. The June decision reverses this. In the words of David Doniger, one of the NRDC lawyers who argued the original Chevron case, “The net effect will be to weaken our government’s ability to meet the real problems the world is throwing at us.”

A functional government, committed to safeguarding its citizens, might be keen to create a regulatory agency or pass comprehensive legislation, but we in the United States do not have such a government. In light of congressional dithering,4 regulatory capture, and a politicized judiciary, pundits and scholars have proposed other ways to ensure safe AI. Harding suggests that the Internet Corporation for Assigned Names and Numbers (ICANN), the international, nongovernmental group responsible for maintaining the Internet’s core functions, might be a possible model for international governance of AI. While it’s not a perfect fit, especially because AI assets are owned by private companies, and it would not have the enforcement mechanism of a government, a community-run body might be able, at least, to determine “the kinds of rules of the road that AI will need to adhere to in order to protect the future.”

In a similar vein, Marcus proposes the creation of something like the International Atomic Energy Agency or the International Civil Aviation Organization but notes that “we can’t really expect international AI governance to work until we get national AI governance to work first.” By far the most intriguing proposal has come from the Fordham law professor Chinmayi Sharma, who suggests that the way to ensure both the safety of AI and the accountability of its creators is to establish a professional licensing regime for engineers that would function in a similar way to medical licenses, malpractice suits, and the Hippocratic oath in medicine. “What if, like doctors,” she asks in the Washington University Law Review, “AI engineers also vowed to do no harm?”

Sharma’s concept, were it to be adopted, would overcome the obvious obstacles currently stymieing effective governance: it bypasses the tech companies, it does not require a new government bureaucracy, and it is nimble. It would accomplish this, she writes,
by establishing academic requirements at accredited universities; creating mandatory licenses to “practice” commercial AI engineering; erecting independent organizations that establish and update codes of conduct and technical practice guidelines; imposing penalties, suspensions or license revocations for failure to comply with codes of conduct and practice guidelines; and applying a customary standard of care, also known as a malpractice standard, to individual engineering decisions in a court of law.
Professionalization, she adds, quoting the network intelligence analyst Angela Horneman, “would force engineers to treat ethics ‘as both a software design consideration and a policy concern.’”

Sharma’s proposal, though unconventional, is no more or less aspirational than Marcus’s call for grassroots action to curb the excesses of Big Tech or Harding’s hope for an international, inclusive, community-run, nonbinding regulatory group. Were any of these to come to fruition, they would be likely targets of a Republican administration and its tech industry funders, whose ultimate goal, it seems, is a post-democracy world where they decide what’s best for the rest of us.5 The danger of allowing them to set the terms of AI development now is that they will amass so much money and so much power that this will happen by default.

Expedia shares jump on report that Uber held talks for takeover bid of travel booking giant

Expedia shares soared Thursday on reports that Uber had discussed a takeover bid for the travel booking giant.

Expedia shares rose as much as 8.4% Wednesday evening after the Financial Times first reported the potential bid. Expedia shares closed up nearly 5% at $158.01 on Thursday.

The talks were in early stages, and it is unclear whether an acquisition will take place, since Uber has not formally approached Expedia, three people familiar with the process told the Financial Times.

Uber CEO Dara Khosrowshahi previously served as Expedia’s chief executive for more than a decade.

Uber CEO Dara Khosrowshahi knows a thing or two about Expedia – the travel booking giant worth about $20 billion. He served as CEO of the booking group for more than a decade until 2017, when he took the helm at the rideshare company.

Uber has a market capitalization of around $168 billion after an 80.6% rally over the past year that boosted its stock price.


Uber bounced back after a tough 2022, when cash-strapped customers pulled back on spending amid inflationary pressures.

It’s not the first challenge Khosrowshahi has led the company through. When he joined Uber in 2017, the company was in the midst of a nearly year-long crisis. It had suffered severe reputational damage after allegations of sexual harassment at the company arose and Google’s Waymo accused Uber, in a lawsuit, of stealing trade secrets.

Uber was also then struggling to turn a profit after seven years of losses.

Now, Uber is enjoying a boom in rideshare demand. Its Uber Eats business has also found success as customers turn to food delivery options en masse. 

The acquisition would expand Uber into a “super app” with the ability to offer travel booking options, according to Wedbush analyst Dan Ives.

Meanwhile, Expedia’s stock price is about 26% below its 2022 high. Though the company owns heavy-hitters like Hotels.com, Trivago and Travelocity, it still faces competition from industry big shots like Booking, Airbnb and Google.

Expedia’s own website allows customers to book flights, hotels, cars and tourism activities. In August, the Expedia Group reported $28.8 billion in total gross bookings for its second quarter.

If Uber does acquire Expedia, it would be a “major strategic home run,” Wedbush analyst Dan Ives told CNBC. 

The rideshare app has been working to expand its offerings – not only into food delivery, but also into train and flight bookings and other travel services.

The merger could fulfill Uber’s wish of becoming a “super app,” Ives said.

Uber declined to comment. Expedia did not immediately respond to requests for comment.

US sanctions Chinese entities for building, shipping Russian Garpiya drones used in Ukraine

WASHINGTON — The United States on Thursday announced fresh sanctions targeting Chinese and Russian entities for their role in designing, building and shipping attack drones that have resulted in mass casualties in Ukraine. The sanctions target two Chinese entities, Xiamen Limbach Aircraft Engine Co., Ltd., and Redlepus Vector Industry Shenzhen Co Ltd (Redlepus), Russian entity…

U.S. trade policy in question as election nears: new tariff hikes could threaten U.S. tech and force supply chain rethinks

With the U.S. presidential election approaching on 5 November, the contrasting trade positions of Democratic candidate Kamala Harris and Republican candidate Donald Trump are creating uncertainty for U.S. shippers, tech companies and global supply chains.
While neither candidate is a staunch proponent of free trade, Harris has signalled a cautious approach that includes maintaining some existing tariffs. In contrast, Trump’s more aggressive stance on tariffs could significantly alter trade dynamics, impacting U.S. industries, supply chains, and consumers alike, according to Stephen Olson, Visiting Fellow at the Institute of Southeast Asian Studies.

Speaking on the latest episode of The Freight Buyers’ Club Podcast, produced with support from Dimerco Express Group, he said of Harris: “She would continue maintaining the Trump-era tariffs… and has expressed a willingness to use strategic tariffs to support U.S. workers.”
However, he noted that Harris’s focus on bolstering green industries could affect different sectors in varying ways.
Trump, on the other hand, has declared that he would pursue extensive tariff hikes if elected, promising to protect American industries. Proposals range from a 10% universal import tariff to a staggering 500% tariff on Chinese EV imports.
Lobbyists, analysts and economists have argued such tariffs could disrupt trade and increase costs for U.S. consumers and businesses without necessarily bringing jobs back to the U.S. Olson said that for US exporters and importers, the stakes were high.
“Trump has articulated what would be the most protectionist U.S. trade policy in at least 100 years,” Olson added, pointing to the likelihood of retaliatory tariffs from key partners such as the EU and China. “There’s a real danger that we could be heading for a spiralling trade war along the lines of what we saw in the aftermath of the Smoot-Hawley tariffs during the 1930s,” he warned.
U.S. tech impact
Speaking on The Freight Buyers’ Club, Ed Brzytwa, Vice President of International Trade at the Consumer Technology Association (CTA), warned that Trump’s proposed tariffs could have a drastic effect on the consumer technology sector.
“Tariffs are taxes on Americans, and they’re regressive,” he said. A CTA study released earlier this month found that Trump’s proposed tariffs would see “the purchasing power of U.S. consumers for consumer technology products decrease by $90 billion”, with potential price hikes of 46% on laptops and tablets and 26% on smartphones.
“Americans may say they’re willing to pay more for a product made in the United States, but if the price is too high, they’re not going to be able to purchase those products,” he added.
Brzytwa also emphasized the risk of retaliation, which he argued would “make the United States less competitive.”
With the tech industry facing significant challenges if tariffs increase, Brzytwa underscored the difficulty of reshoring manufacturing to the U.S. “A US$500 billion investment and a tenfold increase in labour would be required over ten years to bring tech production back to the U.S.,” he said. In his view, the costs are prohibitive, and the workforce simply isn’t there.
Supply chain upheaval
Olson also highlighted how the increasing complexity of dual-use technologies in everyday products, such as smart refrigerators and modern cars, is likely to result in ever more stringent trade barriers. “An expanding portion of the products we trade contain dual-use technologies, so the level of restrictions, barriers, and additional permissions you’re going to have to get are only going to increase.”
He urged logistics and supply chain professionals to prepare for a challenging road ahead, with a more fragmented trading landscape inevitably resulting in more complex supply chains.
“My strong advice would be to go into your boss’s office and request a pay raise because your job is going to get a lot harder,” he said.


‘Anora’ thinks she’s found her Prince Charming. This 5-star movie is no fairy tale.

“Anora,” written and directed by Sean Baker, is a startlingly empathetic film about an exotic dancer in a New York “gentlemen’s club.” The reason it’s startling is that we’re used to seeing sexually explicit material like this sensationalized onscreen. But Baker is a humanist – there is nothing exploitative about what he does here. He’s after deeper emotional truths. Perhaps this is why “Anora” has been internationally acclaimed. It won the Palme d’Or at Cannes, that festival’s highest honor, and rightly so.

When we first encounter Anora (Mikey Madison), or Ani, as she wants to be called, she is plying her trade while also keeping a sharp eye on the men’s wallets and the clock. Because she speaks Russian – courtesy of her grandmother, who never learned English – she is put together with a new club member, Ivan (Mark Eydelshteyn), a goofball 21-year-old who, she finds out, is the son of a billionaire Russian oligarch. “You work in a cool place,” he tells her. Soon, as his private dancer, she is working in an even cooler place – the oceanside manse he alone occupies while his parents are in Russia.

A spoiled scamp, Ivan seems younger than 21 and barely speaks English. When he asks Ani her age, she tells him, probably truthfully, that she’s 23. He half-seriously replies, “You act like you’re 25.”
Why We Wrote This
People who live on society’s margins aren’t always treated with compassion and sympathy. But the director of “Anora” offers both. “I’ve rarely encountered a scene that moved me as completely and complicatedly as this film’s final moments,” says the Monitor’s critic.
Ani tells Ivan he is funny – as in, ha-ha funny – and we sense that, unlike most of the blandishments she hands out to customers, this is a compliment she means. Despite her street smarts, she’s both flummoxed and flattered by this guy. When he impulsively asks her to marry him, she accepts the offer, warily at first, and then wholeheartedly.

It’s a madcap Cinderella fantasy that, of course, is bound to collapse when Ivan’s parents find out. Enraged and en route to New York, they assign a pair of trusted local fixers, the Armenian Toros (Karren Karagulian) and his burly comrade-in-arms Garnick (Vache Tovmasyan), along with Igor (Yura Borisov), a thuggish-looking Russian, to annul the marriage.

Not so easy. Screaming at the top of her lungs and biting her captors, Ani insists she is Ivan’s rightful wife. Ivan, meanwhile, without Ani, has fled the scene. Much of the remainder of the film – which also deftly paints a fully lived-in portrait of the Brighton Beach Russian community – is about how this gaggle of misfits track him down.

“Anora” seamlessly interweaves a full range of tones, from the comic to the tragicomic. The entire cast is altogether extraordinary. Much of the film, especially once the fixers arrive, plays almost like slapstick. And yet, even at its giddiest, which also includes a jaunt to a quickie wedding chapel in Las Vegas, Baker never once loses sight of the humanity of these people.

This sensibility has always been a hallmark of Baker’s films, most notably “The Florida Project,” which told the story of children living in a rundown motel in the shadow of Walt Disney World. He has a feeling for lives lived on the margins, and what one must do to survive. Ani is such a powerful creation because Baker and Madison, without stooping to sentimentality, understand the character’s pathos: Given a glimpse of a glittering new life, she desperately wants to save it. She is not being mercenary, not anymore. She genuinely wants to be happy. She wants to be a wife. It is her pride.

The irony of “Anora” is that Ani is seeking normalcy in a world ill-fitted to meet her desires. When the fixers, and then Ivan’s parents, call her out as a prostitute, she explodes. She feels betrayed because that is not how she sees herself. We don’t see her that way, either.

And neither, it turns out, does Igor, who, with his bald pate and hoodie, appears so quietly menacing. Like everybody else in “Anora,” he is not what we initially take him to be. This refusal to stigmatize characters is the hallmark of Baker’s art. The scenes between Ani and Igor, which develop from contempt on her part to something far more emotionally layered, are the compassionate core of this film. I’ve rarely encountered a scene that moved me as completely and complicatedly as this film’s final moments. Baker isn’t merely demonstrating a sympathy for these people. He is expressing a profound sympathy for the bewildering convolutions of the human condition.

Peter Rainer is the Monitor’s film critic. “Anora” is rated R for strong sexual content throughout, graphic nudity, pervasive language, and drug use. It is in English, Russian, and Armenian, with English subtitles.

Bruce Willis Predicted One Of The Biggest Horror Movies Of All Time On The Set Of Pulp Fiction


What does “Pulp Fiction” have to do with low-budget horror movies? Well, the film’s $8 million budget might seem like a fairly significant amount of money, but it’s not the kind of figure you’d expect for a film starring Samuel L. Jackson, John Travolta, and Uma Thurman. Of course, at the time those actors weren’t the bona fide megastars they’d become, or, in Travolta’s case, king of the 0% Rotten Tomatoes score (sorry John, you’re a legend). Still, $8 million is more the kind of budget you’d expect for a mid-range horror movie, so it’s impressive Quentin Tarantino managed to make $212 million from his off-beat postmodern pastiche.

That’s not the only connection between “Pulp Fiction” and horror, either. Aside from the film’s often striking depictions of violence, there’s a surprising link between one of its stars and an ultra low-budget horror movie that debuted the same decade as Tarantino’s film.
Bruce Willis made “Pulp Fiction” profitable before shooting even began, simply by agreeing to appear in the film, bringing his well-established movie star gravitas without demanding a huge payday. The actor didn’t have a big part in Tarantino’s celebrated opus, but as boxer Butch Coolidge, Willis reminded audiences that he more than deserved his star status following a few box office failures. As it turns out, the “Die Hard” star also made a prediction on the set of “Pulp Fiction” that proved to be so oddly accurate it’s almost as disturbing as any of the violence therein.

Bruce Willis successfully predicts a modern horror classic


Posted by filmmaker Scott Derrickson on Twitter/X, a clip shows Bruce Willis speaking to Quentin Tarantino, who’s holding the camera, on the set of “Pulp Fiction” and making some eerily accurate predictions about the future of filmmaking. “Some day in the next five years,” he says, “someone’s gonna take one of these [camcorders] and make a feature film with it.” The actor went on to lay out a scenario whereby low-budget, lo-fi filmmaking becomes the norm:

“Some kid, some 17-year-old kid, is gonna make this killer, drop-dead, poorly-lit video movie that is gonna be the hippest f***ing thing. And then there’s gonna be hundreds of them everywhere. And they’re gonna cost about $60,000.”

Now, “Pulp Fiction” was released in 1994 and five years later, in 1999, we saw the release of the movie Willis apparently foresaw in a vision: “The Blair Witch Project.” This legendary low-budget horror film, shot on a Hi8 camcorder, kick-started the found footage trend and became a cultural sensation thanks in part to a genius marketing strategy that promoted the movie as if it were actual found footage of real events.
Admittedly, directors Daniel Myrick and Eduardo Sánchez were not 17-year-old kids when they made “Blair Witch”; they were, in fact, in their mid-30s when they shot the film. But that’s about the only thing Willis got wrong in his prediction. Five-year timespan? Check. “Poorly-lit video movie?” Check. (Just take a look at the infamous final shot of “Blair Witch.”) And guess how much “The Blair Witch Project” was made for? $60,000. Or, at least, between $35,000 and $60,000.

Bruce Willis’ prediction was a little too accurate


After “The Blair Witch Project” was picked up for distribution, the final budget grew considerably prior to the film’s premiere at the 1999 Sundance Film Festival. Reportedly, a sound remix and 35mm transfer brought the final budget to around $600,000. But with a global box office take of $248 million, that increase from $60,000 to $600,000 turned out to be negligible. What’s important here, though, is not what distributor Artisan Entertainment made from “Blair Witch” or that the movie became one of the most successful independent films of all time, but that Bruce Willis apparently possesses supernatural abilities of his own.

Willis’ comments are especially impressive considering his claim that the first low-budget camcorder film would spawn “hundreds” like it, which is exactly what has happened in the wake of “Blair Witch.” Found footage is now, itself, a well-established genre, with numerous films taking their cues from Daniel Myrick and Eduardo Sánchez’s modest production. It’s a lineage that includes such celebrated efforts as Oren Peli’s 2007 horror flick “Paranormal Activity” and Matt Reeves’ 2008 monster horror feature “Cloverfield” — two of the scariest found footage horror movies ever made. Of course, you could make the point that this genre dates all the way back to films like 1980’s “Cannibal Holocaust” or even earlier than that, but “Blair Witch” was the first time such a low-budget film that embraced the lo-fi nature of its filmmaking equipment received such widespread acclaim and attention.

Even today, you can see the influence of “Blair Witch.” Take something like “Skinamarink,” one of the scariest horror films of 2023. Kyle Edward Ball’s film was made on a budget of just $15,000 and shot in the director’s childhood home. With its embrace of lo-fi aesthetics and its modest production, it feels very much part of the legacy of “Blair Witch” and shows that Willis’ prediction rings true all the way to our present day.
Not that Quentin Tarantino has struggled in any sense since he made “Pulp Fiction,” but when Willis tells the director at the beginning of the clip in question, “You should be the first guy to do this,” Tarantino might have done well to listen.