rr features

  • Low overhead compared to other similar tools, especially on mostly-single-threaded workloads
  • Supports recording and replaying all kinds of applications: Firefox, Chrome, QEMU, LibreOffice,
    Go programs, …
  • Records and replays multi-process workloads, including whole containers
  • Works with gdb scripting and IDE integration
  • Durable, compact traces that can be ported between machines
  • Chaos mode to make intermittent bugs more reproducible

the rr debugging experience

To begin, use rr to record your application:

$ rr record /your/application --args
FAIL: oh no! 

The entire execution, including the failure, was saved to disk.
That recording can now be debugged:

$ rr replay
GNU gdb (GDB) ...
0x4cee2050 in _start () from /lib/

Remember, you’re debugging the recorded trace
deterministically, not a live, nondeterministic
execution. The address spaces, register contents, syscall
data, and so on of the replayed execution are exactly the
same in every run.

You can use most gdb commands:

(gdb) break mozilla::dom::HTMLMediaElement::HTMLMediaElement
(gdb) continue
Breakpoint 1, mozilla::dom::HTMLMediaElement::HTMLMediaElement (this=0x61362f70, aNodeInfo=...)

For example, suppose you need to restart the debugging session
because you missed breaking at a critical point in the
execution. No problem: just use gdb’s run command to restart the replay.

(gdb) run
The program that is being debugged was already started.
Start it from the beginning? (y or n) y
Breakpoint 1, mozilla::dom::HTMLMediaElement::HTMLMediaElement (this=0x61362f70, aNodeInfo=...)

The run command started another replay run of your
recording from the beginning. After the restart, however,
exactly the same execution was replayed again, and all your
debugging state was maintained across the restart.

Note that the this pointer of the
dynamically-allocated object was the same in both replay
sessions. Every replay session uses exactly the same memory
allocations as the recording, which means you can hard-code the
addresses of the objects you wish to inspect.
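Because allocations are stable, you can, for example, note an address in one session, restart, and inspect the same object by that hard-coded address. A sketch, using the this pointer from the transcript above:

```
(gdb) run
...
Breakpoint 1, mozilla::dom::HTMLMediaElement::HTMLMediaElement (this=0x61362f70, aNodeInfo=...)
(gdb) print *(mozilla::dom::HTMLMediaElement *) 0x61362f70
```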

Reverse execution is even more powerful. Suppose we’re debugging Firefox layout:

Breakpoint 1, nsCanvasFrame::BuildDisplayList (this=0x2aaadd7dbeb0, aBuilder=0x7fffffffaaa0, aDirtyRect=..., aLists=...)
    at /home/roc/mozilla-inbound/layout/generic/nsCanvasFrame.cpp:460
460   if (GetPrevInFlow()) {
(gdb) p mRect.width
$1 = 12000

We know this value is incorrect, and we want to know where it was set.
With rr, that’s possible.

(gdb) watch -l mRect.width
Hardware watchpoint 2: -location mRect.width
(gdb) reverse-cont
Continuing.

Hardware watchpoint 2: -location mRect.width

Old value = 12000
New value = 11220
0x00002aaab100c0fd in nsIFrame::SetRect (this=0x2aaadd7dbeb0, aRect=...)
    at /home/roc/mozilla-inbound/layout/base/../generic/nsIFrame.h:718
718       mRect = aRect;

Combining hardware data watchpoints and reverse execution can be extremely powerful!


This video shows you how to quickly record and replay Firefox.

This video shows rr’s basic capabilities in more detail.

Robert O’Callahan gives a technical talk about rr.

getting started

Build from source

Building and installing rr from source is straightforward, and is
recommended if you are having trouble with the packages, since OS
updates and kernel changes sometimes require rr modifications.

Or in Fedora:

cd /tmp
wget https://github.com/rr-debugger/rr/releases/download/5.5.0/rr-5.5.0-Linux-$(uname -m).rpm
sudo dnf install rr-5.5.0-Linux-$(uname -m).rpm

Or in Ubuntu:

cd /tmp
wget https://github.com/rr-debugger/rr/releases/download/5.5.0/rr-5.5.0-Linux-$(uname -m).deb
sudo dpkg -i rr-5.5.0-Linux-$(uname -m).deb

background and motivation

rr’s original motivation was to make debugging of intermittent
failures easier. These failures are hard to debug because any given
run of the program may not exhibit the failure. So we built a tool
that can record program executions with low overhead, letting you
record test runs until you see a failure, then replay the failing
recording as many times as you need until you have completely
understood it.

We also hoped that deterministic replay would make debugging of any
bug easier. In a normal debugger, information you learn during a
debugging session (e.g. the addresses of objects of interest, and the
timing of important events) often becomes obsolete when you have to
rerun the testcase. With deterministic replay, that never needs to
happen: your knowledge of what happens during the failing run
increases monotonically.

Furthermore, debugging is a process of tracing effects back to their
causes, and that is much easier if your debugger can execute
backwards in time. It is well known that, given a record-and-replay
system that supports replay from checkpoints, you can implement
reverse execution by restoring a previous checkpoint and executing
forwards to the desired point in time. We hoped that if we could
build a low-overhead record-and-replay system that works well on the
applications we care about (i.e. Firefox), we could build a backend
for gdb’s reverse-execution commands that is actually useful.

These goals have all been achieved. rr is not just a working research
prototype; it’s a practical tool in frequent use by developers on a
variety of projects, small and large.

rr records a group of Linux user-space processes, capturing all their
inputs from the kernel plus the effects of the few nondeterministic
CPU instructions those processes execute. rr replay is guaranteed to
preserve instruction-level control flow, register contents, and
memory contents. Memory layout is always the same, the addresses of
objects don’t change, syscalls return exactly the same data, etc.

Tools such as fuzzers and randomized fault injectors become even more
powerful when combined with rr. These tools are good at triggering
intermittent failures, but it is often difficult to reproduce a given
failure again in order to debug it. With rr, the randomized execution
of a program can simply be recorded; if the execution fails, the
saved recording can then be used to debug the problem
deterministically.

rr reduces the cost of fixing bugs, helping produce higher-quality
software at lower cost. rr also makes debugging more fun.

rr in context

Many record-and-replay systems for debugging preceded rr. What makes
rr unique is its combination of design goals:

  • Initial focus on Firefox. Many record-and-replay
    techniques and research projects scale poorly and cannot
    handle Firefox, or were experimental and never fully
    developed. Firefox is exactly the kind of complex
    application that most needs debugging aids, so a tool that
    works on Firefox is likely to be useful elsewhere.
  • Deployability. rr runs on stock
    Linux kernels, on commodity hardware, and requires no
    system configuration changes. Many record-and-replay
    techniques require kernel changes, and many rely on running
    the OS inside a virtual machine.
  • Low run-time overhead. We want rr to replace
    gdb in developers’ normal workflows. This means you must
    start getting results with rr about as fast as you would
    with plain gdb. Lower overhead also means tests are less perturbed.
  • Simplicity of design. We did not have many resources
    to develop rr, so we avoided complex approaches
    such as dynamic binary instrumentation. That simplicity has
    also helped to make rr more robust and to keep its overhead low.

The overhead rr imposes depends on your application’s workload. On
Firefox test suites, rr recording performance is very usable:
slowdowns as low as 1.2x are common. A 1.2x slowdown means that if
a suite takes 10 minutes to run by itself, it takes around
12 minutes to run under rr recording. Overhead can be much higher
depending on the workload, but for mostly-single-threaded programs,
we know of no comparable record-and-replay system with overhead as
low as rr’s.


rr …

  • emulates a single-core machine, so parallel programs incur
    the slowdown of running on a single core. This is an inherent
    feature of the design.
  • cannot record processes that share memory with processes
    outside the recording tree. This is an inherent feature of the
    design. rr automatically disables features such as X shared
    memory for recorded processes to avoid this problem.
  • requires a reasonably modern x86 CPU, because it depends on
    certain performance counter features that are not available in
    older CPUs.
  • requires some knowledge of each system call executed by the
    recorded processes. rr supports many syscalls (those required by
    Firefox and the other applications people have used rr on), but
    support is not complete, so you may run rr on your application
    and find a syscall that still needs to be implemented. Please
    file github issues for unsupported syscalls.
  • sometimes needs to be updated to handle kernel changes, updates
    to system libraries, or new CPU families. If rr doesn’t work for
    you (and none of the caveats above apply), please
    file an issue.

further reference

The Extended Technical Report
This is our best overview of rr’s performance and how it works.

The rr wiki
This contains pages on technical topics related to rr.

Ask on the mailing list or in the #rr IRC channel if you have
questions about rr.



Google Pixel 7 and 7 Pro are getting a built-in VPN at no extra cost

Users of the Google Pixel 7 and 7 Pro devices will be able to secure their data without the need to pay for an additional Android VPN after the company said it would be including its Google One VPN service at no extra cost. 

The move will make the Pixel 7 and 7 Pro the first smartphones to include a free VPN connection. 

The offer is restricted to just some countries, though – and what’s more, some data won’t be secured inside the VPN tunnel.  

“Peace of mind when you connect online ✨ Later this year, #Pixel7 and 7 Pro will be the only phones with a VPN by Google One—at no extra cost. #MadeByGoogle”

Google Pixel 7 VPN

Despite the aforementioned limits, the tech giant assures that the VPN software won’t associate users’ app and browsing data with their accounts. 

Google One VPN typically costs around $10 per month as part of the company’s Premium One plan, which also comes with 2TB of cloud storage on top. 

This decision is the latest move to bring Google’s mobile data security to the next level. Not too long ago, the company made Google One VPN available also for iOS devices, and also introduced the option of having an always-on VPN across its latest smartphones. 

Google promises that its secure VPN software will shield your phone against hackers on unsecure networks, like public Wi-Fi. It will also hide your IP address so that third parties won’t be able to track your location.

Short for virtual private network, a VPN is exactly the tool you want to shield your sensitive data, as it masks your real location and encrypts all your data in transit. Besides privacy, it can allow you to bypass geo-restrictions and other online blocks. 

Chiara is a multimedia journalist, with a special eye for latest trends and issues in cybersecurity. She is a Staff Writer at Future with a focus on VPNs. She mainly writes news and features about data privacy, online censorship and digital rights for TechRadar, Tom’s Guide and T3. With a passion for digital storytelling in all its forms, she also loves photography, video making and podcasting. Originally from Milan in Italy, she is now based in Bristol, UK, since 2018.



The Steam Deck dock is finally here and will ship faster than you think


After months of waiting and delays, Valve has finally announced that the Steam Deck dock is available for purchase on its official site.

Not only that but, according to Valve, the dock will ship out in an incredibly fast one to two weeks, which pairs with the fact that the Steam Deck itself is now shipping with no wait time (not to mention that it’s incredibly easy to set up). The port selection is pretty solid as well, with the dock featuring three USB-A 3.1 gen 1 ports, one Ethernet port, a DisplayPort 1.4, and an HDMI 2.0 port. And for its power supply, it uses a USB-C passthrough delivery.

A Steam Deck dock will run you $90 (around £81 / AU$140), which is a bit steeper than most third-party options on the market right now. But for those waiting it out for an official product until now, price most likely will not be an issue.

Is it worth buying? 

Considering that even Steam Decks themselves are shipping without a queue and that the dock has such a quick turnaround to delivery, it seems that the supply chain issues that had been gripping Valve are loosening considerably.

However, the dock itself is far from perfect. Because it uses USB-C for display output, a third-party USB-C dock with its own power delivery and video out will drive a display just as well as the official dock. 

And as mentioned before, the price of the official Steam Deck dock is steeper than many third-party options on the market, meaning that those who are on a budget might pass this product up in favor of a lower-priced one.

There are also some bugs that Valve is working on fixing at this time, including one involving compatibility with LG displays. According to the FAQ, if the “Docking Station is connected via HDMI, sleep/wake can result in visual noise.”

It might be worth waiting for Valve to work out the kinks of its dock before investing in one. And while you’re waiting, research other options that might better suit your needs.

Allisa freelanced at TechRadar for nine months before joining as a Computing Staff Writer. She mainly covers breaking news and rumors in the computing industry, and writes reviews and featured articles for the site. In her spare time you can find her chatting it up on her two podcasts, Megaten Marathon and Combo Chain, as well as playing any JRPGs she can get her hands on.

Why doesn’t set -e (or set -o errexit, or trap ERR) do what I expected?

set -e was an attempt to add “automatic error detection” to the shell. Its goal was to cause the shell to abort any time an error occurred, so you don’t have to put || exit 1 after each important command. This does not work well in practice.

The goal of automatic error detection is a noble one, but it requires the ability to tell when an error actually occurred. In modern high-level languages, most tasks are performed by using the language’s builtin commands or features. The language knows whether (for example) you tried to divide by zero, or open a file that you can’t open, and so on. It can take action based on this knowledge.

But in the shell, most of the tasks you actually care about are done by external programs. The shell can’t tell whether an external program encountered something that it considers an error — and even if it could, it wouldn’t know whether the error is an important one, worthy of aborting the entire program, or whether it should carry on.

The only information conveyed to the shell by the external program is an exit status — by convention, 0 for success, and non-zero for “some kind of error”. The developers of the original Bourne shell decided that they would create a feature that would allow the shell to check the exit status of every command that it runs, and abort if one of them returns non-zero. Thus, set -e was born.
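The convention is easy to observe directly with the minimal commands `true` and `false`, whose only job is to exit with status 0 and 1 respectively (`$?` holds the exit status of the most recent command):

```shell
true
echo "exit status of true:  $?"   # prints 0
false
echo "exit status of false: $?"   # prints 1
```

Note that without set -e, the script simply carries on past the failing `false`.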

But many commands return non-zero even when there wasn’t an error. For example,

if [ -d /foo ]; then ...; else ...; fi

If the directory doesn’t exist, the [ command returns non-zero. Clearly we don’t want to abort when that happens — our script wants to handle that in the else part. So the shell implementors made a bunch of special rules, like “commands that are part of an if test are immune”, and “commands in a pipeline, other than the last one, are immune”.
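A minimal sketch of the first rule in action (the directory name is just a placeholder assumed not to exist): the failing `[` is the `if` test, so it is immune, and the script runs to completion even under set -e:

```shell
set -e
if [ -d /no/such/dir ]; then
  echo "dir exists"
else
  echo "dir missing"    # this branch runs
fi
echo "still alive"      # set -e did not abort the script
```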

These rules are extremely convoluted, and they still fail to catch even some remarkably simple cases. Even worse, the rules change from one Bash version to another, as Bash attempts to track the extremely slippery POSIX definition of this “feature”. When a SubShell is involved, it gets worse still — the behavior changes depending on whether Bash is invoked in POSIX mode. Another wiki has a page that covers this in more detail. Be sure to check the caveats.

A reference comparing behavior across various historical shells also exists.

Story time

Consider this allegory, originally posted to bug-bash:

Once upon a time, a man with a dirty lab coat and long, uncombed hair
showed up at the town police station, demanding to see the chief of
police.  "I've done it!" he exclaimed.  "I've built the perfect
criminal-catching robot!"

The police chief was skeptical, but decided that it might be worth
the time to see what the man had invented.  Also, he secretly thought,
it might be a somewhat unwise move to completely alienate the mad
scientist and his army of hunter robots.

So, the man explained to the police chief how his invention could tell
the difference between a criminal and law-abiding citizen using a
series of heuristics.  "It's especially good at spotting recently
escaped prisoners!" he said.  "Guaranteed non-lethal restraints!"

Frowning and increasingly skeptical, the police chief nevertheless
allowed the man to demonstrate one robot for a week.  They decided that
the robot should patrol around the jail.  Sure enough, there was a
jailbreak a few days later, and an inmate digging up through the
ground outside of the prison facility was grabbed by the robot and
carried back inside the prison.

The surprised police chief allowed the robot to patrol a wider area.
The next day, the chief received an angry call from the zookeeper.
It seems the robot had cut through the bars of one of the animal cages,
grabbed the animal, and delivered it to the prison.

The chief confronted the robot's inventor, who asked what animal it
was.  "A zebra," replied the police chief.  The man slapped his head and
exclaimed, "Curses!  It was fooled by the black and white stripes!
I shall have to recalibrate!"  And so the man set about rewriting the
robot's code.  Black and white stripes would indicate an escaped
inmate UNLESS the inmate had more than two legs.  Then it should be
left alone.

The robot was redeployed with the updated code, and seemed to be
operating well enough for a few days.  Then on Saturday, a mob of
children in soccer clothing, followed by their parents, descended
on the police station.  After the chaos subsided, the chief was told
that the robot had absconded with the referee right in the middle of
a soccer game.

Scowling, the chief reported this to the scientist, who performed a
second calibration.  Black and white stripes would indicate an escaped
inmate UNLESS the inmate had more than two legs OR had a whistle on
a necklace.

Despite the second calibration, the police chief declared that the robot
would no longer be allowed to operate in his town.  However, the news
of the robot had spread, and requests from many larger cities were
pouring in.  The inventor made dozens more robots, and shipped them off
to eager police stations around the nation.  Every time a robot grabbed
something that wasn't an escaped inmate, the scientist was consulted,
and the robot was recalibrated.

Unfortunately, the inventor was just one man, and he didn't have the
time or the resources to recalibrate EVERY robot whenever one of them
went awry.  The robot in Shangri-La was recalibrated not to grab a
grave-digger working on a cold winter night while wearing a ski mask,
and the robot in Xanadu was recalibrated not to capture a black and
white television set that showed a movie about a prison break, and so
on.  But the robot in Xanadu would still grab grave-diggers with ski
masks (which it turns out was not common due to Xanadu's warmer climate),
and the robot in Shangri-La was still a menace to old televisions (of
which there were very few, the people of Shangri-La being on the average
more wealthy than those of Xanadu).

So, after a few years, there were different revisions of the
criminal-catching robot in most of the major cities.  In some places,
a clever criminal could avoid capture by wearing a whistle on a string
around the neck.  In others, one would be well-advised not to wear orange
clothing in certain rural areas, no matter how close to the Harvest
Festival it was, unless one also wore the traditional black triangular
eye-paint of the Pumpkin King.

Many people thought, "This is lunacy!"  But others thought the robots
did more good than harm, all things considered, and so in some places
the robots are used, while in other places they are shunned.

The end.


Or, “so you think set -e is OK, huh?”

Exercise 1: why doesn’t this example print anything?

   2 set -e
   3 i=0
   4 let i++
   5 echo "i is $i"

Exercise 2: why does this one sometimes appear to work? In which versions of bash does it work, and in which versions does it fail?

   2 set -e
   3 i=0
   4 ((i++))
   5 echo "i is $i"

Exercise 3: why aren’t these two scripts identical?

   2 set -e
   3 test -d nosuchdir && echo no dir
   4 echo survived
   2 set -e
   3 f() { test -d nosuchdir && echo no dir; }
   4 f
   5 echo survived

Exercise 4: why aren’t these two scripts identical?

   1 set -e
   2 f() { test -d nosuchdir && echo no dir; }
   3 f
   4 echo survived
   1 set -e
   2 f() { if test -d nosuchdir; then echo no dir; fi; }
   3 f
   4 echo survived

Exercise 5: under what conditions will this fail?

   1 set -e
   2 read -r foo < configfile


But wait, there’s more!

Even if you use expr(1) (which we do not recommend — use arithmetic expressions instead), you still run into the same problem:

   1 set -e
   2 foo=$(expr 1 - 1)
   4 echo survived

However, command substitution runs in a subshell that unsets set -e (unless the inherit_errexit shell option, added in Bash 4.4, is enabled):

   1 set -e
   2 foo=$(expr 1 - 1; true)
   4 echo survived

Note that set -e is not unset for commands that are run asynchronously, for example with process substitution:

   1 set -e
   2 mapfile foo < <(true; echo foo)
   3 echo ${foo[-1]} 
   4 mapfile foo < <(false; echo foo)
   5 echo ${foo[-1]} 

Another pitfall associated with set -e occurs when you use commands that look like assignments but aren’t, such as export, declare, typeset or local.

   1 set -e
   2 f() { local var=$(somecommand that fails); }
   3 f    
   5 g() { local var; var=$(somecommand that fails); }
   6 g    

In function f, the exit status of somecommand is discarded. It won’t trigger the set -e because the exit status of local masks it (the assignment to the variable succeeds, so local returns status 0). In function g, the set -e is triggered because it uses a real assignment which returns the exit status of somecommand.
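The difference can be sketched with a runnable example; `false` stands in for the failing command, and each case runs in its own `bash -c` so the aborting one doesn’t take the outer script down with it:

```shell
# f: 'local' itself succeeds, masking the failed command substitution,
# so set -e is not triggered and "f survived" is printed
bash -c 'set -e; f() { local var=$(false); }; f; echo "f survived"'

# g: the assignment is a separate command whose status is that of
# 'false', so set -e aborts before the echo; nothing is printed
bash -c 'set -e; g() { local var; var=$(false); }; g; echo "g survived"' || true

echo "done"
```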

A particularly dangerous pitfall with set -e is combining functions with conditionals. The following snippets will not behave the same way:

   1 set -e
   2 f() { false; echo "This won't run, right?"; }
   3 f
   4 echo survived
   1 set -e
   2 f() { false; echo "This won't run, right?"; }
   3 if f; then  
   4     echo survived
   5 fi

As soon as a function is used as a conditional (in a list or with a conditional test or loop) set -e stops being applied within the function. This may not only cause code to unexpectedly start executing in the function but also change its return status!

With process substitution, the exit status is also discarded, since it is not visible from the main script:

   1 set -e
   2 cat <(somecommand that fails)
   3 echo survived

Using a pipe makes no difference, as only the exit status of the rightmost process is considered:

   1 set -e
   2 somecommand that fails | cat -
   3 echo survived

set -o pipefail is a workaround: it makes the pipeline return the exit status of the last (rightmost) command that failed:

   1 set -e -o pipefail
   2 failcmd1 | failcmd2 | cat -
   4 echo survived

though with pipefail in effect, code like this will sometimes cause an error, depending on whether the output of somecmd exceeds the size of the pipe buffer or not:

   1 set -e -o pipefail
   2 somecmd | head -n1
   4 echo survived

So-called strict mode

In the mid 2010s, some people decided that the combination of set -e, set -u and set -o pipefail should be used by default in all new shell scripts. They call this unofficial bash strict mode, and they claim that it “makes many classes of subtle bugs impossible” and that if you follow this policy, you will “spend much less time debugging, and also avoid having unexpected complications in production”.

As we’ve already seen in the exercises above, these claims are dubious at best. The behavior of set -e is quite unpredictable. If you choose to use it, you will have to be hyper-aware of all the false positives that can cause it to trigger, and work around them by “marking” every line that’s allowed to fail with something like ||true.
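The usual shape of that marking looks like this (`false` stands in for any command that is allowed to fail):

```shell
set -e
false || true           # marked: this line may fail without aborting
echo "reached the end"  # still runs
```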


GreyCat’s personal recommendation is simple: don’t use set -e. Add your own error checking instead.

rking’s personal recommendation is to go ahead and use set -e, but beware of possible gotchas. It has useful semantics, so to exclude it from the toolbox is to give into FUD.

geirha’s personal recommendation is to handle errors properly and not rely on the unreliable set -e.



Copyright © 2022 Xanatan