Deep Space Nine Upscale Project (DS9UP): Technical Goals and FAQ
I’ve been needing to write a sort of “mission statement” and FAQ for this project, and this seems as good a time as any.
The goal of the Deep Space Nine Upscale Project (DS9UP) is to create a version of Star Trek: Deep Space Nine worth watching in the modern era of 4K and HD televisions and monitors. Topaz Video Enhance AI has been a critical part of those efforts to date, but I’ve spent the last month testing AviSynth, StaxRip, Handbrake, and at least a dozen other applications in the process of creating this encode.
Because image quality is intrinsically subjective, I’m not going to claim I can somehow create the “best” version of Deep Space Nine, but I intend to map out multiple paths and settings tweaks that lead to different outcomes. I’ve gone back and specifically focused on the DVD encodes for two reasons:
1). We need every single scrap of data for upscaling (our results bear out the importance of this in several spots).
2). I’m creating a legal route for individuals to upscale a TV show they already own. I will not be creating or distributing any torrents based on my own work. I will be publishing a full tutorial on how to create what I’ve created, once we’ve reached that point.
To date, I’ve used two workstations simultaneously for DS9UP testing: An AMD Threadripper 3990X and an Intel Core i9-10980XE, both equipped with 64GB of RAM and an RTX 2080. I’ve occasionally tapped additional processing power in the form of eight-core Intel and AMD systems. Both systems have been excellent, but the 3990X is particularly good at running many encodes in parallel.
The current encoder preset I’m working with and wrote about earlier this week is codenamed “Rubicon,” after the runabout.
I have far more compute power available than is typical for a project like this, and I’m using it accordingly.
Before we go further, here’s some surprise Season 7 footage I created.
Goals and Principles
The final upscaling and filter application process must be as simple as possible, to increase the likelihood people can follow it. No individual scene edits unless absolutely unavoidable.
Upscaling should require as little re-encoding as possible, to reduce source degradation.
When encoding cannot be avoided, re-encode in maximum detail. Storage is cheap and Topaz Video Enhance AI offers no control whatsoever over final output settings. Err on the side of caution.
Create a minimum of two workflows balanced around maximum quality versus sane processing time requirements.
The upscale should rely on as much free-to-use software as possible. Topaz isn’t free to use, but it does include a 30-day free trial.
The workflow should improve the underlying source image quality even before upscaling is applied, so the non-upscaled video is an upgrade in its own right.
When in doubt, encode it, and compare it.
Render all results at near-maximum quality. When there are questions about what maximum quality settings are, encode all of the likely options simultaneously. When combinatorics makes this impossible, choose likely targets based on a close reading of the various filter settings.
Be willing to laugh at some of the ridiculously bad quality encodes you will occasionally create, especially if it takes 1-2 days to create them.
Address slow rendering times by leveraging greater parallelism. If you’re finishing 15-20 encodes per day, it won’t matter if it takes 12-36 hours to finish them.
Encode the entire episode at once, for easier spot comparison of any area.
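The parallelism principle above is just Little’s law: concurrency = throughput × latency. The sketch below (the function name is mine; the figures are the ones quoted above) shows how many simultaneous jobs the stated throughput actually requires.

```python
def required_parallel_jobs(encodes_per_day: float, hours_per_encode: float) -> float:
    """Little's law: jobs in flight = throughput * latency.

    Converts hours to days so both terms share the same time unit.
    """
    return encodes_per_day * (hours_per_encode / 24.0)

# Finishing 15 encodes/day when each takes a full 24 hours
# means ~15 jobs must be running at any moment.
print(required_parallel_jobs(15, 24))   # 15.0

# At the optimistic end (12-hour encodes, 20/day), ~10 jobs suffice.
print(required_parallel_jobs(20, 12))   # 10.0
```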
Rubicon is not a perfect example of its own goals and principles. Like a lot of first season efforts, it needs further refinement. Currently, it relies on multiple pieces of paid software, and the source is re-encoded more than I’d like. I’ve also been forced to use Handbrake as an initial ripper rather than MakeMKV due to persistent problems with audio/video muxing in MakeMKV’s output. Handbrake has no such issue, and I’ve noticed no meaningful quality loss from a Handbrake rip on “Very Slow” with an RF of 2.
There are some nasty dependencies to contend with across applications. AviSynth doesn’t always like manipulating a video after it has been through DaVinci Resolve Studio and Topaz. DaVinci won’t ingest MKV files and doesn’t support MPEG-2 at all.
Currently, Rubicon uses Handbrake for the initial rip, followed by StaxRip as an AviSynth front-end GUI. After processing via AviSynth, I upscale the video in Topaz VEAI. This creates an intermediate step I personally call 5Sharp, mostly because “That one encode I like” was wordy.
5Sharp is rather nice, IMO, but it struggles to resolve the judder issues caused by DS9’s party trick of flipping back and forth between 23.976 fps and 29.97 fps. I’ve come up with two methods of resolving this issue — the one currently deployed in Rubicon uses DaVinci Resolve Studio, while another option I’m considering relies solely on AviSynth.
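For context on why those two rates coexist on an NTSC disc: film-sourced material at 23.976 fps is mapped onto 29.97 fps interlaced video via 3:2 pulldown, which repeats fields in a fixed cadence — and that repeated-field cadence is exactly what causes judder when it isn’t cleanly reversed. This toy sketch (my own illustration, not part of the workflow above) generates the cadence:

```python
def pulldown_32(film_frames):
    """Map film frames onto interlaced fields using the 3:2 pulldown cadence.

    Every group of 4 film frames (A, B, C, D) becomes 10 fields, i.e. 5 video
    frames: AA BB BC CD DD. The mixed frames (BC, CD) and repeated fields are
    the source of pulldown judder.
    """
    fields = []
    for i, frame in enumerate(film_frames):
        # alternate between emitting 2 fields and 3 fields per film frame
        copies = 2 if i % 2 == 0 else 3
        fields.extend([frame] * copies)
    return fields

print(pulldown_32(["A", "B", "C", "D"]))
# ['A', 'A', 'B', 'B', 'B', 'C', 'C', 'D', 'D', 'D']
```

Four film frames in, ten fields out: 23.976 × (10/8) ≈ 29.97, which is why the two rates interconvert at all.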
The reason I’ve been a little vague about my workflows isn’t that I’m trying to be coy. It’s because it’s virtually impossible to describe them all without sounding like a lunatic. In the past week, I’ve experimented with the following:
Encoding from VOB files created by DVDDecrypter
Encoding from MKV files created by Handbrake
Encoding from MP4 files created by Handbrake
Encoding from MKV files created by MakeMKV
I’ve encoded the VOB files at 23.976 and 29.97 fps to see the differences, experimented with various ways of extracting MKV timecodes in the hopes of fixing my A/V sync issue when ingesting via MakeMKV (no luck), attempted to use VapourSynth and StaxRip to invoke the VFRtoCFR script (no luck and I don’t speak Python), and experimented with multiple methods of adjusting frame rates in multiple applications. In video editing, doing A before B often produces different results than B before A, so I’ve also experimented with reversing the order of my own tests.
I’ve run various methods of adjusting frame rates on all of the sources above, to gauge how each method affects the different source rips. I’m not “settled” on using Handbrake for initial ripping in any meaningful way, except that starting with Handbrake gets me aligned audio and video without locking me into either 23.976fps or 29.97fps the way ripping the VOB files currently does.
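One way to see what a given rip actually contains is to dump its per-frame timestamps (mkvextract can emit a “timestamp format v2” file: one millisecond timestamp per line) and histogram the frame durations — a VFR source shows two distinct durations, a CFR source shows one. This is my own illustration of the idea, not a step from the workflow above:

```python
from collections import Counter

def frame_durations_ms(timestamps_ms):
    """Histogram per-frame durations (rounded to 0.01 ms) from a timestamp list.

    A clean 23.976 fps stream shows only ~41.71 ms deltas; a 29.97 fps stream
    only ~33.37 ms; a VFR mix like DS9 shows both.
    """
    deltas = [round(b - a, 2) for a, b in zip(timestamps_ms, timestamps_ms[1:])]
    return Counter(deltas)

# Synthetic example: 5 frames at 23.976 fps, then 5 frames at 29.97 fps.
ts = []
t = 0.0
for dur in [1001 / 24] * 5 + [1001 / 30] * 5:
    ts.append(t)
    t += dur
print(frame_durations_ms(ts))   # two duration buckets -> the stream is VFR
```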
Are there solutions to these problems? I’m certain there are solutions to these problems. What I don’t want to do is leave stories littered with half-explained workflow questions that represent discarded branches of research.
Now that I’ve finished my most recent massive report, I plan to hook up with some of the other grassroots work being done on this project.
Why are you using Handbrake instead of MakeMKV?
I’d love to be using MakeMKV. In fact, I’ve rendered MakeMKV-based source hundreds of times. There are two problems with MakeMKV that I have yet to solve.
1). Misaligned audio/video at the beginning of a stream.
2). How Deep Space Nine‘s variable frame rate is handled by many applications.
StaxRip, for example, will attempt to rip a MakeMKV stream into a hybrid constant frame rate (CFR) of 24.66fps, having apparently averaged the 29.97fps content frame rate with the 23.976fps content frame rate. Ripping the VOBs directly is possible — and this solves the audio sync problem — but this also forces the show into all one frame rate or the other.
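That 24.66fps figure is consistent with a duration-weighted average: total frames divided by total runtime. The sketch below back-solves the split; the roughly 89%/11% proportion is my own assumption chosen to reproduce 24.66, not a measured value for any episode.

```python
def average_fps(segments):
    """Duration-weighted constant frame rate for a VFR mix.

    segments: list of (duration_fraction, fps) pairs.
    Total frames / total duration is the single CFR value that
    preserves the overall frame count.
    """
    total_frames = sum(dur * fps for dur, fps in segments)
    total_duration = sum(dur for dur, _ in segments)
    return total_frames / total_duration

# Hypothetical split: ~89% of runtime at 23.976 fps, ~11% at 29.97 fps.
# (24000/1001 and 30000/1001 are the exact NTSC rates.)
print(round(average_fps([(0.886, 24000 / 1001), (0.114, 30000 / 1001)]), 2))   # 24.66
```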
Also, Blackmagic’s DaVinci Resolve doesn’t support MPEG-2 or MKV files, which is rather frustrating.
Tool recommendations are welcome.
How Long Does it Take to Encode an Entire Episode?
Honest answer: I’m not sure. It takes between 8.5 and 11 hours just to upscale a DVD source file by 4x. This is sometimes referred to as “4K,” but the final resolution on Rubicon is 2560×1920. It’s roughly a 5MP image. The reason I’m not sure how long it actually takes is that I typically run between four and 10 source encodes simultaneously with an upscale in the background.
Right now, I’d say it takes anywhere from 13 to 20 hours to upscale an episode, start to finish. Eight to 11 of those hours are out of my control; Topaz VEAI takes as long as it takes.
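The resolution arithmetic, for reference: an NTSC DVD frame is 720×480, so a 4x spatial upscale would be 2880×1920; cropping that to 4:3 at 1920 tall yields the stated 2560×1920. The 720×480 source size is the NTSC DVD standard, but the crop step is my inference from the quoted output resolution, not something documented above.

```python
# NTSC DVD source (standard), 4x upscale, then crop to 4:3.
# The crop step is inferred from the stated 2560x1920 output.
src_w, src_h = 720, 480
up_w, up_h = src_w * 4, src_h * 4       # 2880 x 1920 after a straight 4x upscale
crop_w = up_h * 4 // 3                  # 2560: the 4:3 width at 1920 tall
megapixels = crop_w * up_h / 1_000_000
print(up_w, up_h, crop_w, round(megapixels, 1))   # 2880 1920 2560 4.9
```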
Do You Actually Know What You’re Doing?
I’m still a beginner at this sort of thing. In retrospect, there are easier shows to cut one’s teeth on than Deep Space Nine. My strategy for conquering this problem has been spunk, gumption, and overwhelming amounts of processing power.
Are You Aware You’re Doing It Wrong?
I am exquisitely aware that I am doing it wrong. I’ve been attempting to learn how to upscale and effectively remaster video from scratch, with some help from online and real-life friends. I take to video like a duck to vacuum. If you think you know something that might help, there’s a pretty good chance you do.
I’m not tackling this project because I think I’m somehow immune to the Dunning-Kruger effect. I’m taking it on because DS9 is 27 years old and nobody has done it yet. Paramount has made it clear they aren’t going to. We’ve started to lose the actors who starred on the show.
For the first time in my entire career, the tools to fix problems like this are available to ordinary people. Some other groups and I are availing ourselves of them.
You’ll Never Make It as Good as Paramount Could
This one really isn’t a question, but I hear it regularly enough to make it worth addressing. It is not news to me that Paramount is capable of creating a remastered version of Deep Space Nine that would blow mine out of the water. Here’s a shot they created for the documentary “What We Left Behind” last year:
And, for comparison, here’s my own version of that clip, rendered in Rubicon — the best footage I’ve assembled to-date:
I know which one you probably prefer. I know which one I prefer. But since Paramount isn’t doing the work, I’ve got to work with what I’ve got.
The goal of the DS9UP isn’t to create a better version of DS9 than Paramount could produce — it’s to create the best version of Deep Space Nine that it’s possible to build (with allowance for individual taste).