Apple claims the new M2 chip has the following specs.
We all know that these numbers are probably a little fluffy. Maybe a lot fluffy, and in practical applications, they are probably pretty far off. Benchmarking in a lab is fine, but the numbers rarely reflect real-world performance.
After Gregorio posted this image earlier this week, it sparked a fair amount of discussion on the interwebs about the memory transfer speed of a 6502 processor.
The 6502 on Commodore machines shares the clock with the video chip. Since dual-ported RAM wasn't financially feasible at the time, they used a memory-access trick that allows both the video chip and the processor to access memory within a single clock cycle. I think it's the same on most Commodores, but on the VIC-20, the processor accesses memory on the low half of the clock signal and the VIC chip on the high half. Maybe that's backward… anyhoo, you get the point.
Memory at 1MB per second
Going back to the slide, this 1MB-per-second memory bandwidth is what folks are questioning.
On every clock cycle, the 6502 accesses memory somewhere… the stack, the zero page, the address in the program counter, other memory locations, etc. So at 1 MHz, typical for Commodore machines, this 1MB-per-second bandwidth is probably accurate in a vacuum, where marketing people hang out.
It’s important to note that Gregorio Naçu‘s slide was a parody and not intended to be a hard numbers accurate kind of thing. Please remember that because if you don’t, the rest of this discussion will ruffle your feathers.
We’ll try some memory transfers to get an idea of what actual transfer speeds might look like using standard Commodore hardware. Other 6502-based platforms might be faster or slower, so I encourage you to try some tests of your own, and please let me know what you find.
Again, remember that transferring memory takes more clock cycles than just reading or writing…
Let’s give this a go on the most popular 6502-based system of all time, the Commodore 64.
We’ll take a cue right from the venerable Rodnay Zaks.
Incidentally, Robin did a long video fixing this book's implementation bug. I'll be using the revised version, as I think it's a well-established example of a real-world block transfer. Sure, there may be faster ways, but this is a realistic way, which is what we're going for.
You can read this excellent chapter on how this works, and Robin’s video goes into it in great detail. Here’s what we’re going to do:
source = $0800
dest = $4800
len = $4000
from = $fb
to = $fd
tmpx = $a6
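To make the shape of the routine concrete, here's a minimal Python model of a page-oriented block copy using the parameters above. This is my own sketch, not Zaks's actual 6502 listing: it copies whole 256-byte pages first (the way the Y register walks a page with indirect-indexed addressing), then the remainder.

```python
# Hypothetical Python model of the page-oriented 6502 block copy:
# whole 256-byte pages first, then the remaining bytes. The function
# name and structure are illustrative, not from the book.
def block_copy(mem, source, dest, length):
    pages, remainder = divmod(length, 256)
    src, dst = source, dest
    for _ in range(pages):              # full pages, Y = 0..255
        for y in range(256):
            mem[dst + y] = mem[src + y]
        src += 256                      # bump the pointer high bytes
        dst += 256
    for y in range(remainder):          # partial final page
        mem[dst + y] = mem[src + y]

mem = bytearray(64 * 1024)                            # a 64K address space
mem[0x0800:0x0800 + 0x4000] = bytes(range(256)) * 64  # fill the source
block_copy(mem, 0x0800, 0x4800, 0x4000)               # copy 16K, as above
assert mem[0x4800:0x8800] == mem[0x0800:0x4800]
```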
We can count jiffies on a Commodore to give us an idea of how fast this copy takes. Sure there's a slight overhead in the setup, but I think it's marginal enough that we can ignore it for our purposes.
Okay, that's pretty fast. Since that's 16k transferred, it works out to about 54.6 k per second.
Let's do a bunch of them and see what it comes out as.
We can call this pretty quickly 255 times and do the same math.
So at $1128 jiffies (4,392 decimal) and 255 transfers of 16,384 bytes, we're seeing around 57K per second.
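As a quick sanity check on that figure, here's the arithmetic (assuming the NTSC jiffy clock of 60 ticks per second):

```python
# Sanity-check the quoted numbers: $1128 jiffies at 60 per second,
# 255 transfers of 16,384 bytes each.
jiffies = 0x1128                 # 4,392 decimal
seconds = jiffies / 60           # NTSC jiffy clock: 60 ticks per second
total_bytes = 255 * 16384        # 4,177,920 bytes moved in total
rate = total_bytes / seconds
print(round(seconds, 1), round(rate))  # 73.2 seconds, ~57,075 bytes/s
```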
Grain of salt, yes, but real-world enough.
Yeah, there's some overhead in the setup and running of the transfer. We could probably make this loop a few percentage points faster. Maybe if we make it tight, we could get 15% better out of it. But the point was real-world uses, and this is a pretty good example of a tight but flexible loop to transfer. Let's not get TOO pedantic here.
What's important to note is that transferring memory takes several clock cycles per byte. If we count them for a typical indirect-indexed inner loop, it's about 16 cycles per byte, which tracks roughly with our results.
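Here's that cycle counting worked out. The instruction timings are the standard 6502 figures; the inner-loop shape (lda (zp),y / sta (zp),y / iny / bne) and the badline-loss percentage are my assumptions, not measurements:

```python
# Estimate throughput from per-instruction 6502 cycle timings:
# lda (zp),y = 5, sta (zp),y = 6, iny = 2, bne (taken) = 3.
cycles_per_byte = 5 + 6 + 2 + 3            # ~16 cycles to move one byte
ntsc_clock = 1_022_727                     # NTSC C64 clock, Hz
raw_rate = ntsc_clock / cycles_per_byte    # ~63,920 bytes/s, uninterrupted
badline_loss = 0.10                        # assumed VIC-II badline overhead
print(round(raw_rate * (1 - badline_loss)))  # ~57,528 bytes/s
```

That lands close to the ~57K per second we measured, which suggests the measurement is in the right ballpark.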
The KIM-1 is arguably the simplest and purest 6502 platform, so it will be interesting to try memory transfers on it.
It IS clocked a little slower than a Commodore 64, so I expect it to transfer slightly slower. But it doesn't have to compete for memory access with VIC-II "badlines," so maybe it'll be pretty close.
Let's find out.
I don't own a "real" KIM-1, but I do own what are considered the two best clones. Today, let's use the Corsham KIM-1 clone. I'm going to call it a KIM-1 from here forward, mostly because I enjoy getting angry letters about this. You've been warned.
The KIM-1 doesn't have a jiffy clock like the other Commodore machines.
The "Application ports" are easily accessible, so if we set a pin high when we start and set it low again when we finish, we can easily use an oscilloscope to measure the time.
With the expansion bus hooked up on my Corsham KIM board, the Application port A data-direction register is set to output.
And then, we can toggle pin PA0 by setting it high or low. We'll use $FF and $0 for that for simplicity.
Side note: this is a non-standard location for this port; your KIM-1 or clone probably has it in the $1700 range. Check your documentation.
16k in 262 milliseconds is around 62.5k per second. Slightly faster than a Commodore 64, even though an NTSC Commodore 64 runs at a slightly higher clock speed (1.023MHz) than our KIM here.
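Those numbers line up nicely if you assume the same ~16-cycle-per-byte inner loop and the KIM-1's 1 MHz clock (both assumptions on my part):

```python
# Measured: 16,384 bytes in 262 ms. Theoretical: a ~16-cycle-per-byte
# copy loop at 1 MHz, with no video chip stealing cycles.
measured = 16384 / 0.262         # ~62,534 bytes/s
theoretical = 1_000_000 / 16     # 62,500 bytes/s
print(round(measured), round(theoretical))
```

The measured and theoretical rates agree to within a fraction of a percent, which is what you'd hope for on a machine with nothing contending for the bus.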
Let's do this 255 times in a tight loop, ignoring the overhead of things like JSR, which takes a few clock cycles each loop. We're going for a ballpark here.
So our loop code then looks something like
lda #$ff
sta $1601 ;technically setting all pins high here; could just use #$01
Then if we probe it with an oscilloscope, we can measure the 1+ minute square wave.
So 255 transfers of 16,384 bytes take 67 seconds. Or about 62k per second.
I happen to have a Cerberus 2080 board. As far as I know, mine is the only green one in the world.
This has dual-ported RAM and can clock the brand-new (yes, they still make them) WDC 65C02S processor at a blazing 8MHz. Let's see what kind of results we get from it.
Again, we have no jiffy clock here, so I'm going to skip right to the ~4MB transfer, time it over the video capture, and have it show "done" on the screen when it finishes. Unlike the KIM-1, I don't have a straightforward way to time it with an I/O pin. It'll give us a good enough idea of where we are.
16,384 bytes 255 times took 6.29 seconds, so maxed out, a modern 6502 at 8MHz can do about 664.2k per second. Not too bad!
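Checking that figure, and working backward to an implied cycles-per-byte cost (the 8 MHz clock is from the Cerberus spec above; the rest is arithmetic):

```python
# 255 transfers of 16,384 bytes in 6.29 seconds on the 8 MHz 65C02.
total_bytes = 255 * 16384
rate = total_bytes / 6.29                 # ~664,216 bytes/s
implied_cycles = 8_000_000 / rate         # ~12 cycles per byte
print(round(rate), round(implied_cycles, 1))
```

Interestingly, that implies a somewhat cheaper per-byte loop than the C64 run; I haven't verified exactly which routine variant ran here, so take the comparison loosely.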
Sure, this was not a comprehensive set of tests. But in the real world, a 6502 can copy the entire contents of a Commodore 64's memory from one place to another in about a second. Pretty respectable, and it was pretty fast for the time.
You could certainly use self-modifying code and unroll this copy routine to get better performance, at the price of flexibility and, arguably, readability for the average casual 6502 assembly coder.
Again, this was not a "how fast can we absolutely make it" but an everyday use examination.
This copy can handle from one to 2^16 (65,536) bytes and every number in between. And as my favorite YouTuber is fond of saying, "I know, I know, but I didn't do that. Let the angry emails begin."
If you have an REU on your Commodore, it can theoretically swap out memory at one byte per clock cycle. A true 1MB per second. I hear that games like Sam's Journey make use of this feature quite a bit.
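At one byte per cycle, swapping the machine's entire address space is quick. A back-of-the-envelope calculation (NTSC clock assumed):

```python
# One byte per clock cycle: how long to move the full 64 KB?
clock = 1_022_727                    # NTSC C64 clock, Hz
ms = 65536 / clock * 1000
print(round(ms))                     # ~64 ms for the whole address space
```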
I'd love to hear your thoughts on how you'd approach this, pedantic, nit-picky, and otherwise. Bonus points if you demonstrate methods that show dramatically better results.
Whatever you do, be sure to have fun and don't take marketing slides too seriously.
True to form, Nothing has just announced the full reveal date for its upcoming audio product, Ear (stick).
So, an announcement about an announcement. You’ve got to hand it to Carl Pei’s marketing department; they never miss a trick.
What we’re saying is that although we still have ‘nothing’ conclusive about the features, pricing, or release date for the Ear (stick), except an image of another model holding them (and we’ve seen plenty of those traipsing down the catwalk recently), we do have a date – the day when we’ll be granted official access to this information.
That day is October 26. Nothing assures us that on this day we’ll be able to find out everything, including pricing and product specifications, during the online Ear (stick) Reveal, at 3PM BST (which is 10AM ET, or 1AM on Wednesday if you’re in Sydney, Australia) on nothing.tech.
Any further information? A little. Nothing calls the Ear (stick), which is now the product’s official name, “the next generation of Nothing sound technology”, and its “most advanced audio product yet”.
But that’s not all! Apparently, Ear (stick) are “half in-ear true wireless earbuds that balance supreme comfort with exceptional sound, made not to be felt when in use. They’re feather-light with an ergonomic design that’s moulded to your ears. Delivered in a unique charging case, inspired by classic cosmetic silhouettes, and compactly formed to simply glide into pockets.”
Opinion: I need more than a lipstick-style case
Nothing Ear (stick) – official leaked renders (pic.twitter.com/FrhKmRttmi, October 1, 2022)
Aside from this official ‘news’ from Nothing, leaked images and videos of the Ear (stick) have been springing up all over the internet (thank you, developer Kuba Wojciechowski) and they depict earbuds that look largely unchanged, which is a shame.
For me, the focus needs to shift from gimmicks such as a cylindrical case with a red section at the end which twists up like a lipstick. Don’t get me wrong, I love a bit of theater, but only if the sound coming from the earbuds themselves is top dog.
See, that lipstick case shape likely will not support wireless charging. That, plus the rumored lack of ANC, means the Ear (stick) is probably arriving as the more affordable option in Nothing’s oeuvre.
For now, we sit tight until October 26.
Becky is a senior staff writer at TechRadar (which she has been assured refers to expertise rather than age) focusing on all things audio. Before joining the team, she spent three years at What Hi-Fi? testing and reviewing everything from wallet-friendly wireless earbuds to huge high-end sound systems. Prior to gaining her MA in Journalism in 2018, Becky freelanced as an arts critic alongside a 22-year career as a professional dancer and aerialist – any love of dance starts with a love of music. Becky has previously contributed to Stuff, FourFourTwo and The Stage. When not writing, she can still be found throwing shapes in a dance studio, these days with varying degrees of success.
You might soon have to buy YouTube Premium to watch 4K YouTube videos, a new user test suggests.
According to a Reddit thread highlighted on Twitter by leaker Alvin, several non-Premium YouTube users have reported seeing 4K resolution (and higher) video options limited to YouTube Premium subscribers on their iOS devices. For these individuals, videos are currently only available to stream in up to 1440p (QHD) resolution.
The apparent experiment only seems to be affecting a handful of YouTube users for now, but it suggests owner Google is toying with the idea of implementing a site-wide paywall for access to high-quality video in the future.
So, after testing up to 12 ads on YouTube for non-Premium users, now some users reported that they also have to get a Premium account just to watch videos in 4K. (pic.twitter.com/jJodoAxeDp, October 1, 2022)
It’s no secret that Google has been searching for new ways to monetize its YouTube platform in recent months. In September, the company introduced five unskippable ads for some YouTube users as part of a separate test – an unexpected development that, naturally, didn’t go down well with much of the YouTube community.
A resolution paywall seems a more palatable approach from Google. While annoying, the change isn’t likely to provoke the same level of ire from non-paying YouTube users as excessive ads, given that many smartphones still max out at QHD resolution anyway.
Of course, if it encourages those who do care about high-resolution viewing to invest in the platform’s Premium subscription package, it may also be more lucrative for Google. After all, YouTube Premium, which offers ad-free viewing, background playback and the ability to download videos for offline use, currently costs $11.99 / £11.99 / AU$14.99 per month.
Suffice to say, the subscription service hasn’t taken off in quite the way Google would’ve hoped since its launch in 2014. Only around 50 million users are currently signed up to YouTube Premium, while something close to 2 billion people actively use YouTube on a monthly basis.
Might the addition of 4K video into Premium’s perk package bump up that number? Only time will tell. We’ll be keeping an eye on our own YouTube account to see whether this resolution paywall becomes permanent in the coming months.
Axel is a London-based staff writer at TechRadar, reporting on everything from the newest movies to latest Apple developments as part of the site’s daily news output. Having previously written for publications including Esquire and FourFourTwo, Axel is well-versed in the applications of technology beyond the desktop, and his coverage extends from general reporting and analysis to in-depth interviews and opinion.
Axel studied for a degree in English Literature at the University of Warwick before joining TechRadar in 2020, where he then earned a gold standard NCTJ qualification as part of the company’s inaugural digital training scheme.
USB-C has come a long way since its debut in 2014, now becoming the standard for charging and basic data transfer (on everything except the iPhone, of course!) as well as audio and video for more and more devices. The European Parliament, long enamored with the idea of a consumer- and environmentally-friendly standard for charging devices, is pushing it forward even further. A newly-passed law says that almost all portable electronics will need to charge via USB-C by 2026.
At this point, most new laptops already use USB-C charging, taking advantage of the standard’s flexibility to deliver a range of wattages up to 100 watts. There are two exceptions: the top of the market and the bottom. Cheap budget laptops are still sometimes equipped with less expensive, semi-proprietary barrel charging cables or something like Lenovo’s rectangular charger.
On the other hand, power-hungry laptops that need more than 100 watts still use proprietary connections for their massive adapters. The USB Implementers Forum is working on expanding that limit, and some of these laptops can still charge slowly over USB-C. These are the only laptops that Europe will allow to be sold with proprietary chargers after the spring of 2026. While nothing forces manufacturers to follow this new law worldwide, streamlined manufacturing and economies of scale will effectively force the rest of the world to follow in practice, if not in legislation.
Parliament posted its reasoning online (spotted by Windows Central), saying that this move will encourage technological innovation and give consumers more interoperability, with the bonus that more easily reusable cables and chargers mean less electronic waste. The post estimates that it will help consumers save up to €250 million a year on new charger purchases.
The bigger news is that this move is likely to finally force Apple to abandon the Lightning connector for the iPhone, cheaper iPads, and a few lingering accessories. (Apple already uses USB-C charging on most iPads and all MacBooks.) The switch for smaller mobile devices will happen by the end of 2024. This includes “all new mobile phones, tablets, digital cameras, headphones and headsets, handheld videogame consoles and portable speakers, e-readers, keyboards, mice, portable navigation systems, earbuds and laptops that are rechargeable via a wired cable.” (Note: This technically creates a loophole for any device that recharges via wireless only.) That should give laptop manufacturers plenty of time to flush out the remaining old-fashioned chargers from their assembly lines.
Michael is a former graphic designer who’s been building and tweaking desktop computers for longer than he cares to admit. His interests include folk music, football, science fiction, and salsa verde, in no particular order.