Published: 16 Jun 2022
The business-enabling category of privacy-enhancing technologies (PETs) is making its mark as one of the foundational technologies of the digital transformation era. With data as the backbone of the digital economy, market factors such as the drive to view data as an organisational asset, the need for global data sharing and collaboration, and an ever-increasing demand for privacy have catapulted this family of technologies into the spotlight.
The direct impact that they deliver to business and mission capabilities is the reason they are here to stay. Grouped together for their ability to enable, enhance and preserve the privacy of data throughout its lifecycle, these technologies are powerful and transformational in engineering trust and unlocking data value. As the value of PETs becomes more widely recognised and an increasing number of organisations and providers begin using the term, there remain a variety of questions and misconceptions to address.
Front and centre among items to clarify is the name itself. While there have been a number of proposed labels – privacy-preserving technologies, privacy-enhanced computation, privacy enablers – PETs has surfaced as the category name of choice. This is the designation that US and UK leaders used late last year when they announced an initiative to advance these technologies to “harness the power of data in a manner that protects privacy and intellectual property, enabling cross-border and cross-sector collaboration to solve shared challenges”.
There are also misconceptions about which technologies should be included in the category. Clarity on this item centres on how the technologies are used, the role they play in enhancing privacy, and where that impact takes place. As a mathematician by training, I find that the simplest dividing line comes down to computation.
PETs protect data in use, meaning data while it is being used or processed via searches or analytics. By this definition, the core pillars of PETs are homomorphic encryption, secure multi-party computation, and trusted execution environments (sometimes called confidential computing). Other approaches, such as synthetic data and data-at-rest protection mechanisms like tokenisation, are adjacent but fall outside the category because they do not protect data in use.
To continue building a shared understanding of this increasingly visible, transformational family of technologies, let’s address some common myths and misconceptions about PETs.
The PETs category includes technologies that protect, preserve and enhance data throughout its processing lifecycle – technologies that have been studied deeply for decades. Homomorphic encryption (HE), for example, became broadly recognised thanks to research published by Craig Gentry in 2009. The timing of the story is similar for secure multi-party computation (SMPC) and trusted execution environments (TEEs). What has changed more recently is the practicality of their broad use at scale.
Breakthroughs, largely driven by market need and motivation, have firmly taken these technologies from the realm of research to commercial readiness. Although the progress made in recent years is impressive, industry experts agree that we have only begun to scratch the surface. Gartner predicts that, by 2025, half of all large organisations will be using these capabilities for processing data in untrusted environments and multi-party data analytics use cases. These advances are being driven by a growing ecosystem of venture capital-backed startups, well-funded research components of global organisations, and academia.
There are a number of great examples of PETs being implemented at scale today for use cases in financial services, healthcare and government. PETs are enabling cross-jurisdictional data sharing for know-your-customer screenings and fraud investigations. They are enabling organisations to privately leverage third-party data assets without pooling or replicating data. They are facilitating more accurate risk assessment modelling by expanding the number of accessible data sources. They are protecting sensitive indicators and speeding time to value for applications at the processing edge.
In short, PETs are making entirely new things possible across a growing number of industries, overcoming regulatory, organisational, security and national boundaries to enable secure data usage and collaboration that would not otherwise be achievable.
The power of PETs lies in their ability to protect data while it is being used or processed – when searches, analytics and machine learning models are being run over data to extract value. This is different from, and complementary to, other traditional measures that protect data at rest, such as in the file system or database, or data in transit as it moves through the network.
While there are many effective, established solutions for protecting data at rest and data in transit, if organisations want to be able to safely and privately extract value from data assets, these traditional protection strategies are not sufficient. Also, PETs do not replace existing solutions protecting data at rest and in transit; they work alongside them to protect the final segment of the data triad, data in use.
In an emerging category like PETs, there is a tendency to pit technologies against each other to evaluate which technology reigns supreme. The reality is that these technologies each offer unique attributes and choosing the right ones depends entirely on the use case requirements, infrastructure, and the desired level and type of protection. PETs can, and often do, work together.
For example, organisations can use an SMPC capability that leverages HE, and vice versa. Or SMPC and HE techniques can be leveraged in conjunction with a TEE. Organisations looking to utilise PETs should explore all the options available and educate themselves to determine the best fit.
Commercial PETs companies, regulatory bodies, industry consortiums, market analysts, researchers and other third-party groups all have a role to play in building awareness and enhancing understanding. Likewise, those of us working in the PETs space must embrace our role in educating the market: differentiating the technologies, explaining their often-complementary nature, and acknowledging that broad adoption of PETs of all kinds will best serve global privacy challenges.
PETs have a long and rich research history and, as such, many PETs are part of an active ecosystem that includes open source research libraries and algorithms. While it is fantastic to have a research foundation upon which to build, it is also important to remember that these elements are not ready-to-use commercial offerings. For example, HE libraries provide basic cryptographic components, but organisations leveraging them must dedicate engineering, algorithmic and integration resources in order to mature the basic building blocks into viable, enterprise-grade solutions.
Likewise, SMPC libraries offer basic algorithms and TEEs are built into many chips and cloud environments today, but there is much work and deep expertise required to take these fundamental elements and build practical, commercial offerings to protect data in use at scale. That is the value that commercial PETs software providers bring to the table – deep PETs knowledge and off-the-shelf capabilities that are ready to deploy and use today to solve real problems.
The open source research landscape is an awesome tool for advancing innovative technologies and the PETs category has certainly benefited from the efforts of numerous contributors. But these PETs research efforts are just the beginning of the story. Commercial solutions advance and give these research efforts the “wings” required to add real, measurable value.
The time for privacy-enhancing technologies is here. The technologies are ready, the market is ready, and the list of data usage problems demanding secure and private solutions continues to grow. Stephen Almond, director of technology and innovation at the ICO, recently summarised the value this innovative category delivers to the broader market: “Privacy-enhancing technologies help organisations build trust and unlock the potential of data by putting data protection by design into practice.”
We are undeniably in an era of digital transformation, and to ensure we continue forward on a foundation prioritising data privacy and security, we should embrace PETs now. That effort starts with shared understanding.
Ellison Anne Williams is CEO and founder of Enveil
After months of waiting and delays, Valve has finally announced that the Steam Deck dock is available for purchase on its official site.
Not only that, but according to Valve, the dock will ship in an incredibly fast one to two weeks, and the Steam Deck itself is now shipping with no wait time (not to mention that it’s incredibly easy to set up). The port selection is solid as well: the dock features three USB-A 3.1 Gen 1 ports, an Ethernet port, a DisplayPort 1.4 output, and an HDMI 2.0 port. For power, it uses USB-C passthrough delivery.
A Steam Deck dock will run you $90 (around £81 / AU$140), which is a bit steeper than most third-party options on the market right now. But for those waiting it out for an official product until now, price most likely will not be an issue.
Considering that even Steam Decks themselves are shipping without a queue and that the dock has such a quick turnaround to delivery, it seems that the supply chain issues that had been gripping Valve are loosening considerably.
However, the dock itself is far from perfect. Because it relies on USB-C, a third-party USB-C dock with its own power supply and video output can drive a display just as well as the official one.
And as mentioned before, the price of the official Steam Deck dock is steeper than many third-party options on the market, meaning that those who are on a budget might pass this product up in favor of a lower-priced one.
There are also some bugs that Valve is working on fixing at this time, including one involving compatibility with LG displays. According to the FAQ, if the “Docking Station is connected via HDMI, sleep/wake can result in visual noise.”
It might be worth waiting for Valve to work out the kinks of its dock before investing in one. And while you’re waiting, research other options that might better suit your needs.
set -e was an attempt to add “automatic error detection” to the shell. Its goal was to cause the shell to abort any time an error occurred, so you don’t have to put || exit 1 after each important command. This does not work well in practice.
The goal of automatic error detection is a noble one, but it requires the ability to tell when an error actually occurred. In modern high-level languages, most tasks are performed by using the language’s builtin commands or features. The language knows whether (for example) you tried to divide by zero, or open a file that you can’t open, and so on. It can take action based on this knowledge.
But in the shell, most of the tasks you actually care about are done by external programs. The shell can’t tell whether an external program encountered something that it considers an error — and even if it could, it wouldn’t know whether the error is an important one, worthy of aborting the entire program, or whether it should carry on.
The only information conveyed to the shell by the external program is an exit status — by convention, 0 for success, and non-zero for “some kind of error”. The developers of the original Bourne shell decided that they would create a feature that would allow the shell to check the exit status of every command that it runs, and abort if one of them returns non-zero. Thus, set -e was born.
But many commands return non-zero even when there wasn’t an error. For example,
if [ -d /foo ]; then ...; else ...; fi
If the directory doesn’t exist, the [ command returns non-zero. Clearly we don’t want to abort when that happens — our script wants to handle that in the else part. So the shell implementors made a bunch of special rules, like “commands that are part of an if test are immune”, and “commands in a pipeline, other than the last one, are immune”.
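For instance, a small sketch of the if-test immunity (assuming /no/such/dir does not exist):

```shell
#!/bin/bash
set -e
if [ -d /no/such/dir ]; then   # [ returns non-zero, but it is part of the if test
    echo "it exists"
else
    echo "handled in the else branch"
fi
echo "script is still running"
```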
These rules are extremely convoluted, and they still fail to catch even some remarkably simple cases. Even worse, the rules change from one Bash version to another, as Bash attempts to track the extremely slippery POSIX definition of this “feature”. When a subshell is involved, it gets worse still — the behavior changes depending on whether Bash is invoked in POSIX mode. Another wiki has a page that covers this in more detail. Be sure to check the caveats.
Consider this allegory, originally posted to bug-bash:
Once upon a time, a man with a dirty lab coat and long, uncombed hair showed up at the town police station, demanding to see the chief of police. "I've done it!" he exclaimed. "I've built the perfect criminal-catching robot!" The police chief was skeptical, but decided that it might be worth the time to see what the man had invented. Also, he secretly thought, it might be a somewhat unwise move to completely alienate the mad scientist and his army of hunter robots.

So, the man explained to the police chief how his invention could tell the difference between a criminal and a law-abiding citizen using a series of heuristics. "It's especially good at spotting recently escaped prisoners!" he said. "Guaranteed non-lethal restraints!" Frowning and increasingly skeptical, the police chief nevertheless allowed the man to demonstrate one robot for a week. They decided that the robot should patrol around the jail. Sure enough, there was a jailbreak a few days later, and an inmate digging up through the ground outside of the prison facility was grabbed by the robot and carried back inside the prison. The surprised police chief allowed the robot to patrol a wider area.

The next day, the chief received an angry call from the zookeeper. It seems the robot had cut through the bars of one of the animal cages, grabbed the animal, and delivered it to the prison. The chief confronted the robot's inventor, who asked what animal it was. "A zebra," replied the police chief. The man slapped his head and exclaimed, "Curses! It was fooled by the black and white stripes! I shall have to recalibrate!" And so the man set about rewriting the robot's code. Black and white stripes would indicate an escaped inmate UNLESS the inmate had more than two legs. Then it should be left alone. The robot was redeployed with the updated code, and seemed to be operating well enough for a few days.

Then on Saturday, a mob of children in soccer clothing, followed by their parents, descended on the police station. After the chaos subsided, the chief was told that the robot had absconded with the referee right in the middle of a soccer game. Scowling, the chief reported this to the scientist, who performed a second calibration. Black and white stripes would indicate an escaped inmate UNLESS the inmate had more than two legs OR had a whistle on a necklace. Despite the second calibration, the police chief declared that the robot would no longer be allowed to operate in his town.

However, the news of the robot had spread, and requests from many larger cities were pouring in. The inventor made dozens more robots, and shipped them off to eager police stations around the nation. Every time a robot grabbed something that wasn't an escaped inmate, the scientist was consulted, and the robot was recalibrated. Unfortunately, the inventor was just one man, and he didn't have the time or the resources to recalibrate EVERY robot whenever one of them went awry. The robot in Shangri-La was recalibrated not to grab a grave-digger working on a cold winter night while wearing a ski mask, and the robot in Xanadu was recalibrated not to capture a black and white television set that showed a movie about a prison break, and so on. But the robot in Xanadu would still grab grave-diggers with ski masks (which it turns out was not common due to Xanadu's warmer climate), and the robot in Shangri-La was still a menace to old televisions (of which there were very few, the people of Shangri-La being on the average more wealthy than those of Xanadu).

So, after a few years, there were different revisions of the criminal-catching robot in most of the major cities. In some places, a clever criminal could avoid capture by wearing a whistle on a string around the neck. In others, one would be well-advised not to wear orange clothing in certain rural areas, no matter how close to the Harvest Festival it was, unless one also wore the traditional black triangular eye-paint of the Pumpkin King.

Many people thought, "This is lunacy!" But others thought the robots did more good than harm, all things considered, and so in some places the robots are used, while in other places they are shunned. The end.
Or, “so you think set -e is OK, huh?”
Exercise 1: why doesn’t this example print anything?
Exercise 2: why does this one sometimes appear to work? In which versions of bash does it work, and in which versions does it fail?
Exercise 3: why aren’t these two scripts identical?
Exercise 4: why aren’t these two scripts identical?
Exercise 5: under what conditions will this fail?
Even if you use expr(1) (which we do not recommend — use arithmetic expressions instead), you still run into the same problem:
Subshells created by command substitution, however, do not inherit set -e (unless the inherit_errexit option, introduced in Bash 4.4, is enabled):
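A sketch of that behavior, assuming inherit_errexit is not enabled:

```shell
#!/bin/bash
set -e
x=$(false; echo "survived")   # errexit is unset inside the substitution, so 'false' is ignored
echo "x is $x"                # prints: x is survived
```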
Note that set -e is not unset for commands that are run asynchronously, for example with process substitution:
Another pitfall associated with set -e occurs when you use commands that look like assignments but aren’t, such as export, declare, typeset or local.
In function f, the exit status of somecommand is discarded. It won’t trigger the set -e because the exit status of local masks it (the assignment to the variable succeeds, so local returns status 0). In function g, the set -e is triggered because it uses a real assignment which returns the exit status of somecommand.
A particularly dangerous pitfall with set -e is combining functions with conditionals. The following snippets will not behave the same way:
As soon as a function is used as a conditional (in a list or with a conditional test or loop) set -e stops being applied within the function. This may not only cause code to unexpectedly start executing in the function but also change its return status!
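A sketch of that surprise:

```shell
#!/bin/bash
set -e
f() {
    false              # would abort the script if f were called normally
    echo "f keeps going"
}

if f; then             # f used as a condition: set -e is suspended inside it
    echo "f returned success"
fi
```

Called on its own line instead of inside the if, f would abort at false and never return at all.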
Using Process substitution, the exit code is also discarded as it is not visible from the main script:
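For example (a sketch):

```shell
#!/bin/bash
set -e
while read -r line; do
    echo "got: $line"
done < <(echo "one"; false)   # the failure of 'false' never reaches the main script
echo "script continues"
```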
Using a pipe makes no difference, as only the rightmost process is considered:
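For instance:

```shell
#!/bin/bash
set -e
false | cat          # the pipeline's status is cat's (0); the failure of false is ignored
echo "script continues"
```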
set -o pipefail is a partial workaround: it makes a pipeline return the exit status of the rightmost command that exited non-zero (or zero if every command succeeded):
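With pipefail enabled, the same pipeline reports the failure (shown here without set -e so the status can be printed):

```shell
#!/bin/bash
set -o pipefail
false | cat                 # the pipeline now returns false's status
echo "pipeline status: $?"  # prints: pipeline status: 1
```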
though with pipefail in effect, code like this will sometimes cause an error, depending on whether the output of somecmd exceeds the size of the pipe buffer or not:
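A sketch of that failure mode: head exits after one line, and once the producer has written more than a pipe buffer's worth of output, its next write fails with SIGPIPE.

```shell
#!/bin/bash
set -o pipefail
seq 1 200000 | head -n 1      # seq is usually killed by SIGPIPE once head exits
echo "pipeline status: $?"    # typically prints: pipeline status: 141 (128 + SIGPIPE)
```

Under set -e, that non-zero status would silently abort the script even though nothing was actually wrong.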
In the mid 2010s, some people decided that the combination of set -e, set -u and set -o pipefail should be used by default in all new shell scripts. They call this unofficial bash strict mode, and they claim that it “makes many classes of subtle bugs impossible” and that if you follow this policy, you will “spend much less time debugging, and also avoid having unexpected complications in production”.
As we’ve already seen in the exercises above, these claims are dubious at best. The behavior of set -e is quite unpredictable. If you choose to use it, you will have to be hyper-aware of all the false positives that can cause it to trigger, and work around them by “marking” every line that’s allowed to fail with something like ||true.
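For example, even a harmless “no match” from grep must be explicitly excused (a sketch):

```shell
#!/bin/bash
set -e
echo "hello" | grep -q "absent" || true   # grep exits 1 on no match; '|| true' prevents an abort
echo "still running"
```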
GreyCat’s personal recommendation is simple: don’t use set -e. Add your own error checking instead.
rking’s personal recommendation is to go ahead and use set -e, but beware of possible gotchas. It has useful semantics, so to exclude it from the toolbox is to give in to FUD.
geirha’s personal recommendation is to handle errors properly and not rely on the unreliable set -e.