Kebe Says - Dan McDonald's Blog

Old Names and New Places

So recently I acquired @danmcd on Twitter. It was a long time coming. I was relatively late in early-adopting Twitter: late spring 2009. By then someone else had claimed the handle danmcd, to my chagrin.

I was chagrined (in 2009) because I’ve been danmcd at SOMEWHERE since 1988. First .edus, even a .gov and .mil, and of course a series of .coms including my own kebe.com.

(Who and/or what is Kebe might be another blog post in and of itself. In the meantime, this answer will suffice:

Obi-Wan, “it’s me” )

Names are important. Especially in the virtual world, they establish not only presence, but often place as well. I ended up being @kebesays on Twitter for a long time. Luckily, Twitter makes handle-swapping relatively easy, so anyone who was following @kebesays got moved over to @danmcd without issue. I still keep /* XXX KEBE SAYS … at the top, because if you see that in my code, it indicates work-in-progress issues; and aren’t we all works-in-progress?

Speaking of names and places: one name and one place that has been associated with Triton and SmartOS - Joyent - will no longer be associated with SmartOS or Triton. Samsung has decided to use other in-house technology for their future, and that work will continue with Joyent. SmartOS and Triton are being spun off to MNX Solutions, where I will be continuing SmartOS development. See the MNX Triton FAQ and my email for more.

Oh and yes, I’ll get to be ‘danmcd‘ at MNX as well.

Standalone SmartOS Gets Selectable PIs

So what happened?

We’ve introduced a requested feature in SmartOS: the ability to select a platform image from loader(5), aka OS-8231.

To enable this feature, you must (using example bootable pool bootpool):

  • Update BOTH the boot bits and the Platform Image to this release. Normally piadm(1M) updates both, so please install either latest or another ISO-using installation.
  • Once booted to this PI, utter piadm activate 20210812T031946Z OR install another ISO-using installation (even if you never use it) to have the new piadm(1M) generate the /bootpool/boot/os/ directory that the new loader modifications require.

This represents a minor flag day because an older piadm(1M) will not update an existing /bootpool/boot/os/ directory. The PI-selection menus live in /bootpool/boot/os/, and an older piadm(1M) will leave them in an inconsistent state. It is safe to remove /bootpool/boot/os/ if you wish, as the activated (default) PI always boots correctly regardless, modulo actual /bootpool/boot/ corruption.

So Tell Me about the Internals and the os/ Directory!

There were two SmartOS repositories that had changes. The first changeset was in illumos-joyent’s loader(5) Forth files. Alongside some additional support routines, the crux of the change is this addition to the main Joyent loader menu:

\
\ If available, load the "Platform Image Selection" option.
\
try-include /os/pi.rc

If the piadm(1M)-generated file /bootpool/boot/os/pi.rc does not exist, the Joyent loader menu appears as it did prior to this fix.

The os/ Directory and illumos Needing platform/

The os/ directory in a bootable pool’s bootpool/boot filesystem contains directories of Platform Image stamps and the aforementioned pi.rc file.

[root@smartos-efi ~]# piadm list
PI STAMP               BOOTABLE FILESYSTEM            BOOT IMAGE NOW  NEXT 
20210715T010227Z       bootpool/boot                  available  no   no  
20210805T161859Z       bootpool/boot                  available  no   no  
20210812T031946Z       bootpool/boot                  next       yes  yes 
[root@smartos-efi ~]# ls /bootpool/boot/os
20210715T010227Z  20210805T161859Z  pi.rc
[root@smartos-efi ~]# 

Each PI stamp directory contains a single platform symbolic link that points up to the platform-STAMP directory containing the PI.

[root@smartos-efi ~]# ls -lt /bootpool/boot/os/20210805T161859Z
total 1
lrwxrwxrwx   1 root     root          31 Aug 12 14:41 platform -> ../../platform-20210805T161859Z
[root@smartos-efi ~]#
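The link construction is simple enough to sketch with standard tools. This is an illustration of the layout only, not piadm(1M)'s actual implementation; the scratch directory stands in for /bootpool/boot, and the stamp is one from the listing above:

```shell
# Sketch of the os/STAMP/platform layout that piadm(1M) generates.
boot=$(mktemp -d)            # stands in for /bootpool/boot
stamp=20210805T161859Z

mkdir -p "$boot/platform-$stamp"   # the Platform Image itself
mkdir -p "$boot/os/$stamp"         # the loader-visible stamp directory

# The single "platform" link points up and over to the real PI:
ln -s "../../platform-$stamp" "$boot/os/$stamp/platform"

readlink "$boot/os/$stamp/platform"   # → ../../platform-20210805T161859Z
```

Because the link target is relative, the layout keeps working no matter where the bootfs ends up mounted.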

The Triton Head Node loader menu has a pointer to the “prior Platform Image” that expects the explicit path …/os/STAMP/platform to contain the platform image. It was a design mistake of the original standalone SmartOS not to lay out the platform image in this manner, but given that piadm(1M) must generate the pi.rc file anyway, it is not much more difficult to add symbolic-link construction as well.

The pi.rc File

The pi.rc file includes an additional menu item for the main Joyent loader screen:

Joyent Loader Screen

It also contains up to three pages of platform images to choose from. Here’s an example of page 1 of 3:

Joyent Loader Screen

The default PI is on every page, and up to five (5) additional PIs can appear per page. This means 16 PIs (default + 3 * 5) can be offered on a loader screen. Every time a platform image is activated, deleted, or added, the piadm(1M) command regenerates the entire os/ directory, including pi.rc.
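The paging rule lends itself to a small sketch. Everything here is illustrative: the stamps are hypothetical placeholders, and piadm(1M) actually emits Forth menu definitions rather than shell output:

```shell
# Paginate PIs the way the menus are described: the default PI repeats on
# every page, and up to five additional PIs fit on each page.
default="default-PI"
others="pi1 pi2 pi3 pi4 pi5 pi6 pi7"   # hypothetical non-default stamps

out=$(mktemp)
page=1
count=0
echo "Page $page: $default" >> "$out"
for pi in $others; do
    if [ "$count" -eq 5 ]; then
        # Page is full: start a new one, repeating the default PI.
        page=$((page + 1))
        count=0
        echo "Page $page: $default" >> "$out"
    fi
    echo "  $pi" >> "$out"
    count=$((count + 1))
done
cat "$out"
```

With seven extra stamps this yields two pages; three full pages of five, plus the repeated default, gives the 16-PI ceiling described above.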

So How and Why Do I Use This?

  • Temporarily reverting to an older Platform Image may be useful to check for regressions or to isolate behavior to a specific release.
  • Developers can use *just* platform-image installations (platform-yyyymmddThhmmssZ.tgz) to test their new builds without making the bootable pool unusable.

The piadm list output indicates that a non-default PI is booted via its NOW column:

PI STAMP               BOOTABLE FILESYSTEM            BOOT IMAGE NOW  NEXT 
20210114T041228Z       zones/boot                     available  no   no  
20210114T163038Z       zones/boot                     available  no   no  
20210211T055122Z       zones/boot                     none       no   no  
20210211T163919Z       zones/boot                     none       no   no  
20210224T232633Z       zones/boot                     available  no   no  
20210225T124034Z       zones/boot                     none       no   no  
20210226T213821Z       zones/boot                     none       no   no  
20210311T001742Z       zones/boot                     available  no   no  
20210325T002528Z       zones/boot                     available  no   no  
20210422T002312Z       zones/boot                     available  no   no  
20210520T001536Z       zones/boot                     available  no   no  
20210617T001230Z       zones/boot                     available  no   no  
20210701T204427Z       zones/boot                     available  no   no  
20210715T010227Z       zones/boot                     available  no   no  
20210729T002724Z       zones/boot                     available  no   no  
20210804T003855Z       zones/boot                     available  no   no  
20210805T161859Z       zones/boot                     available  yes  no  
20210812T031946Z       zones/boot                     next       no   yes 

In the above example, the SmartOS machine is booted into 20210805T161859Z, but its default is 20210812T031946Z. It would also look this way if piadm activate 20210812T031946Z had just been invoked, as the semantics are the same.

MTV (originally 'MTV: Music Television') Turns 40

That I had to explain MTV's acronym... eeesh.

When Cable TV Was Still Young

Set the wayback machine 40 years plus 6-8 months ago (from the date of this post). Cable TV was rolling out in my suburb of Milwaukee, and it FINALLY arrived at our house. Hurray! We didn't have HBO, but we DID have all of the other fledgling basic cable channels... including Nickelodeon, which was then one of the Warner Amex Satellite Entertainment Company (WASEC) channels. (WASEC, and its progenitor Columbus, Ohio QUBE project, are its own fascinating story.) Nickelodeon mostly had single-digit-aged kids programming, but at night (especially Sunday night) it had a 30-minute show called PopClips, which would play the then mindblowing concept of music videos... or as one friend of mine called them, "Intermissions" (because HBO would play music videos between movies to synch up start times... I didn't have HBO so I trusted him). There is a YouTube narrative video that discusses the show in depth, including its tenuous link to another WASEC channel that was going to start airing 40 years ago today...

I Want My MTV

Anyone sufficiently old knows that MTV stood for Music Television. At midnight US/Eastern time on August 1, 1981, it played its space-program-themed bumper, followed by, "Video Killed the Radio Star" by The Buggles.

Now the local cable company pulled a bit of a dick move with MTV for us. It attached it to HBO. If you didn't have HBO, the cable company scrambled MTV, albeit not as strongly as they did with HBO. They scrambled it by making the picture black-and-white, and cutting out the sound completely. LUCKILY for me, we did have "cable radio" which let us not only get better FM reception, but also the stereo broadcast for MTV. Combine them, and I got to see black-and-white videos with proper sound.

Thanks to people's old videotapes and YouTube, you can watch (modulo a couple of copyright-whiners) the first two hours of MTV here. I'd have embedded this, but I'm guessing the copyright-whiners won that battle too.

There's a lot to unpack about MTV being 40. I'm not going to try too hard in this post, but there are some things that must be acknowledged:

  • MTV was a generation-defining phenomenon for Generation X. I suppose late-wave Boomers (the last of whom were graduating high school or already in college) could make a claim to ownership of MTV's first audience, but as MTV matured, it was very much initially for us Xers.
  • It was initially narrowly focused. The only Black people you'd see on MTV initially were JJ Jackson or members of The Specials. That changed a couple of years later, however.
  • It spawned at least one knock-off: Friday Night Videos, which unlike MTV didn't require Cable.

Of course, MTV doesn't play music videos anymore; we have alternatives now: YouTube, DailyMotion, and their ilk. And if you miss your MTV, or want to know what it looked like, you really don't have to look hard; many people have uploaded at least some VHS rips, many alas without music thanks to copyright teardowns. But with artists often putting out their old music on their own YouTube pages, some have taken to curating lists of them. Even NPR has curated the first 100 songs!

All Your Base Are Belong to 20-Somethings, and Solaris 9

Two Decades Ago…

Someone pointed out recently that the famous Internet meme “All your base are belong to us” turned 20 this week. Boy do I feel old. I was still in California, but Wendy and I were plotting our move to Massachusetts.

In AD 2001, S9 Was Beginning

OF COURSE I watched the video back then. The original Shockwave/Flash version on a site that no longer exists. I used my then-prototype Sun Blade 1000 to watch it, on Netscape, on in-development Solaris 9.

I found a bug in the audio driver by watching it. Luckily for me, portions of the Sun bug database were archived and available for your browsing pleasure. Behold bug 4451857. I reported it, and all of the text there is younger me.

The analysis and solution are not in this version of the bug report, which is a shame, because the maintainer (one Brian Botton) was quite responsive, and appreciated the MDB output. He fixed the bug by moving around a not-shown-there am_exit_task() call.

Another thing missing from the bug report is my “Public Summary” which I thought would tie things up nicely. I now present it here:

In A.D. 2001
S9 was beginning.
Brian: What Happen?
Dan: Someone set up us the livelock
Dan: We get signal
Brian: What!
Dan: MDB screen turn on.
Brian: It’s YOU!
4451857: How are you gentleman?
4451857: All your cv_wait() are belong to us.
4451857: You are on the way to livelock.
Brian: What you say?
4451857: You have no chance to kill -9 make your time.
4451857: HA HA HA HA…
Brian: Take off every am_exit_task().
Dan: You know what you doing
Brian: Move am_exit_task().
Brian: For great bugfix!

Goodbye 2020

Pardon my latency

Well, at least I’m staying on track for single-digit blog posts in a year. :)

Okay, seriously, 2020’s pandemic-and-other-chaos tends to distract. Also, I did actually have a few things worth my attention.

RFD 176

The second half of 2020 at work has been primarily about RFD 176 – weaning SmartOS and Triton off of the requirement for a USB key. Phases I (standalone SmartOS) and II (Triton Compute Node) are finished. Phase III (Triton Head Node) is coming along nicely, thanks to real-world testing on Equinix Metal (nee Packet), and I hope to have a dedicated blog post about our work in this space coming in the first quarter 2021.

Follow our progress in the rfd176 branches of smartos-live and sdc-headnode.

Twins & College

My twins are US High School seniors, meaning they’re off to college/university next fall, modulo pandemic-and-other-chaos. This means applications, a little stress, and generally folding in pandemic-and-other-chaos issues into the normal flow of things as well. Out of respect for their privacy and autonomy, I’ll stop here to avoid details each of them can spill on their own terms.

On 2021

Both “distractions” mentioned above will continue into 2021, so I apologize in advance for any lack of content here for my half-dozen readers. You can follow me on any of the socials mentioned on the right, because I’ll post there if the spirit moves me (especially on issues of the moment).

A Request to Security Researchers from illumos

A Gentle Reminder About illumos

A very bad security vulnerability in Solaris was patched-and-announced by Oracle earlier this week. Turns out, we in open-source-descendant illumos had something in the same neighborhood. We can’t confirm it’s the same bug because reverse-engineering Oracle Solaris is off the table.

In general if a vulnerability is an old one in Solaris, there’s a good chance it’s also in illumos. Alex Wilson said it best in this recent tweet:

If you want to see the full history, the first 11 minutes of my talk from 2016’s FOSDEM contains WHY a sufficiently old vulnerability in Solaris 10 and even Solaris 11 may also be in illumos.

Remember folks, Solaris is closed-source under Oracle, even though it used to be open-source during the last years of Sun’s existence. illumos is open-source, related, but NOT the same as Solaris anymore. Another suggested talk covers this rather well, especially if you start at the right part.

The Actual Request

Because of this history and shared heritage, if you’re a security researcher, PLEASE make sure you find one of many illumos distributions, install it, and try your proof-of-concept on that as well. If you find the same vulnerability in illumos, please report it to us via the security@illumos.org mailing alias. We have a PGP key too!

Thank you, and please test your Solaris exploits on illumos too (and vice-versa).

Now you can boot SmartOS off of a ZFS pool

Booting from a zpool

The most recent published biweekly release of SmartOS has a new feature I authored: the ability to manage and boot SmartOS-bootable ZFS pools.

A few people read about this feature, and jumped to the conclusion that the SmartOS boot philosophy, enumerated here:

  • The "/" filesystem is on a ramdisk
  • The "/usr" filesystem is read-only
  • All of the useful state is stored on the zones ZFS pool.

was suddenly thrown out the window. Nope.

This change is the first phase in a plan to not depend on ISO images or USB sticks for SmartOS, or Triton, to boot.

The primary thrust of this specific SmartOS change was to allow installation-time enabling of a bootable zones pool. The SmartOS installer now allows one to specify a bootable pool, either one created during the "create my special pools" shell escape, or just by specifying zones.

A secondary thrust of this change was to allow running SmartOS deployments to upgrade their zones pools to be BIOS bootable (if the pool structure allows booting), OR to create a new pool with new devices (and use zpool create -B) to be dedicated to boot. For example:

smartos# zpool create -B c3t0d0 standalone
smartos# piadm bootable -e standalone
smartos# piadm bootable
standalone                     ==> BIOS and UEFI
zones                          ==> non-bootable
smartos# 

Under the covers

(NOTE: Edited 3 May 2023 to change "1M" man page refs to "8".)

Most of what’s above can be gleaned from the manual page. This section discusses what the layout of a bootable pool actually looks like, how the piadm(8) command sets things up, and how it expects things to BE set up.

Bootable pool basics

The piadm bootable command will indicate if a pool is bootable at all via the setting of the bootfs property on the pool. That gets you the BIOS bootability check, which admittedly is an assumption. The UEFI check happens by finding the disk's s0 slice, and seeing if it's formatted as pcfs and if the proper EFI System Partition boot file is present.
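The UEFI half of that check can be sketched as follows. This is illustrative only: the function name is made up, the scratch directory stands in for a mounted s0 slice, and EFI/Boot/bootx64.efi is merely the conventional removable-media boot path, not necessarily the exact file piadm(8) looks for:

```shell
# Hypothetical sketch: does an ESP-style filesystem look "UEFI bootable"?
is_uefi_bootable() {
    esp="$1"                              # directory where the ESP is mounted
    [ -f "$esp/EFI/Boot/bootx64.efi" ]    # conventional fallback boot path
}

esp=$(mktemp -d)                          # stands in for the mounted s0 slice
mkdir -p "$esp/EFI/Boot"
touch "$esp/EFI/Boot/bootx64.efi"

is_uefi_bootable "$esp" && echo "UEFI bootable"
```

The real check also has to verify that the slice is pcfs at all before mounting it; that part is skipped here.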

bootfs layout

For standalone SmartOS booting, bootfs is supposed to be mounted on "/" with the pathname equal to the bootfs name. By convention, we prefer POOL/boot. Let’s take a look:

smartos# piadm bootable
zones                          ==> BIOS and UEFI
smartos# piadm list
PI STAMP           BOOTABLE FILESYSTEM            BOOT BITS?   NOW   NEXT  
20200810T185749Z   zones/boot                     none         yes   yes  
20200813T030805Z   zones/boot                     next         no    no   
smartos# cd /zones/boot
smartos# ls -lt
total 9
lrwxrwxrwx   1 root     root          27 Aug 25 15:58 platform -> ./platform-20200810T185749Z
lrwxrwxrwx   1 root     root          23 Aug 25 15:58 boot -> ./boot-20200813T030805Z
drwxr-xr-x   3 root     root           3 Aug 14 16:10 etc
drwxr-xr-x   4 root     root          15 Aug 13 06:07 boot-20200813T030805Z
drwxr-xr-x   4 root     root           5 Aug 13 06:07 platform-20200813T030805Z
drwxr-xr-x   4 1345     staff          5 Aug 10 20:30 platform-20200810T185749Z
smartos#

Notice that the Platform Image stamp 20200810T185749Z is currently booted, and will be booted the next time. Notice, however, that there are no “BOOT BITS”, also known as the Boot Image, for 20200810T185749Z, and instead the 20200813T030805Z boot bits are employed? This allows a SmartOS bootable pool to update just the Platform Image (ala Triton) without altering loader. If one utters piadm activate 20200813T030805Z, then things will change:

smartos# piadm activate 20200813T030805Z
smartos# piadm list
PI STAMP           BOOTABLE FILESYSTEM            BOOT BITS?   NOW   NEXT  
20200810T185749Z   zones/boot                     none         yes   no   
20200813T030805Z   zones/boot                     next         no    yes  
smartos# ls -lt
total 9
lrwxrwxrwx   1 root     root          27 Sep  2 00:25 platform -> ./platform-20200813T030805Z
lrwxrwxrwx   1 root     root          23 Sep  2 00:25 boot -> ./boot-20200813T030805Z
drwxr-xr-x   3 root     root           3 Aug 14 16:10 etc
drwxr-xr-x   4 root     root          15 Aug 13 06:07 boot-20200813T030805Z
drwxr-xr-x   4 root     root           5 Aug 13 06:07 platform-20200813T030805Z
drwxr-xr-x   4 1345     staff          5 Aug 10 20:30 platform-20200810T185749Z
smartos# 

piadm(8) manipulates symbolic links in the boot filesystem to set versions of both the Boot Image (i.e. loader) and the Platform Image.
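That manipulation can be sketched with ordinary tools. This is an illustration of the symlink swap, not piadm(8)'s actual code; ln -sfn is just one way to re-point a link, and the scratch directory stands in for the bootfs:

```shell
# Sketch of what "piadm activate" effectively does to the bootfs links,
# using the stamps from the listings above.
bootfs=$(mktemp -d)                 # stands in for /zones/boot
old=20200810T185749Z
new=20200813T030805Z
mkdir -p "$bootfs/platform-$old" "$bootfs/platform-$new" \
         "$bootfs/boot-$old"     "$bootfs/boot-$new"

# Initial state: platform still on the old PI, boot bits already new.
ln -s "./platform-$old" "$bootfs/platform"
ln -s "./boot-$new"     "$bootfs/boot"

# "piadm activate 20200813T030805Z" re-points the platform link:
ln -sfn "./platform-$new" "$bootfs/platform"

readlink "$bootfs/platform"   # → ./platform-20200813T030805Z
```

Note that only the symlinks change between the two listings; the stamp directories themselves are untouched.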

Home Data Center 3.0 -- Part 2: HDC's many uses

In the prior post, I mentioned a need for four active ethernet ports. These four ports are physical links to four distinct Ethernet networks. Joyent's SmartOS and Triton characterize these with NIC Tags. I just view them as distinct networks. They are all driven by the illumos igb(7d) driver (hmm, that man page needs updating) on HDC 3.0, and I'll specify them now:

  • igb0 - My home network.
  • igb1 - The external network. This port is directly attached to my FiOS Optical Network Terminal's Gigabit Ethernet port.
  • igb2 - My work network. Used for my workstation, and "external" NIC Tag for my work-at-home Triton deployment, Kebecloud.
  • igb3 - Mostly unused for now, but connected to Kebecloud's "admin" NIC Tag.
The zones abstraction in illumos allows not just containment, but a full TCP/IP stack to be assigned to each zone. This makes a zone feel more like a proper virtual machine in most cases. Many illumos distros are able to run a full VMM as the only process in a zone, which ends up delivering a proper virtual machine. As of this post's publication, however, I'm only running illumos zones, not full VM ones. Here's their list:
(0)# zoneadm list -cv
  ID NAME             STATUS     PATH                           BRAND    IP    
   0 global           running    /                              ipkg     shared
   1 webserver        running    /zones/webserver               lipkg    excl  
   2 work             running    /zones/work                    lipkg    excl  
   3 router           running    /zones/router                  lipkg    excl  
   4 calendar         running    /zones/calendar                lipkg    excl  
   5 dns              running    /zones/dns                     lipkg    excl  
(0)# 
Their zone names correspond to their jobs:
  • global - The illumos global zone is what exists even in the absence of other zones. Some illumos distros, like SmartOS, encourage minimizing what a global zone has for services. HDC's global zone serves NFS and SMB/CIFS to my home network. The global zone has the primary link into the home network. HDC's global zone has no default route, so operations that need out-of-the-house networking either go through another zone (e.g. DNS lookups) or require a temporarily added default route (e.g. NTP chimes, `pkg update`).
  • webserver - Just like the name says, this zone hosts the web server for kebe.com. For this zone, it uses lofs(7FS), the loopback virtual file system to inherit subdirectories from the global zone. I edit blog entries (like this one) for this zone via NFS from my laptop. The global zone serves NFS, but the files I'm editing are not only available in the global zone, but are also lofs-mounted into the webserver zone as well. The webserver zone has a vnic (see here for details about a vnic, the virtual network interface controller) link to the home network, but has a default route, and the router zone's NAT (more later) forwards ports 80 and 443 to this zone. Additionally, the home network DHCP server lives here, for no other reason than, "it's not the global zone."
  • work - The work zone is new in the past six years, and as of recently, eschews lofs(7FS) for delegated ZFS datasets. A delegated ZFS dataset, a proper filesystem in this case, is assigned entirely to the zone. This zone also has the primary (and only) link to the work network, a physical connection (for now unused) to my work Triton's admin network, and an etherstub vnic (see here for details about an etherstub) link to the router zone. The work zone itself is a router for work network machines (as well as serves DNS for the work network), but since I only have one public IP address, I use the etherstub to link it to the router zone. The zone, as of recent illumos builds, can further serve its own NFS. This allows even less global-zone participation with work data, and it means work machines do not need backchannel paths to the global zone for NFS service. The work zone has a full illumos development environment on it, and performs builds of illumos rather quickly. It also has its own Unbound (see the DNS zone below) for the work network.
  • router - The router zone does what the name says. It has a vnic link to the home network and the physical link to the external network. It runs ipnat to NAT etherstub work traffic or home network traffic to the Internet, and redirects well-known ports to their respective zones. It does not use a proper firewall, but has IPsec policy in place to drop anything that isn't matched by ipnat, because in a no-policy situation, ipnat lets unmatched packets arrive on the local zone. The router zone also runs the (alas still closed source) IKEv1 daemon to allow me remote access to this server while I'm remote. It uses an old test tool from the pre-Oracle Sun days a few of you half-dozen readers will know by name. We have a larval IKEv2 out in the community, and I'll gladly switch to that once it's available.
  • calendar - Blogged about when first deployed, this zone's sole purpose is to serve our calendar both internally and externally. It uses the Radicale server. Many of my complaints from the prior post have been alleviated by subsequent updates. I wish the authors understood interface stability a bit better (jumping from 2.x to 3.0 was far more annoying than it needed to be), but it gets the job done. It has a vnic link to the home network, a default route, and gets calendaring packets shuffled to it by the router zone so my family can access the calendar wherever we are.
  • dns - A recent switch to OmniOSce-supported NSD and Unbound encouraged me to bring up a dedicated zone for DNS. I run both daemons here, and have the router zone redirect public kebe.com requests here to NSD. The Unbound server services all networks that can reach HDC. It has a vnic link to the home network, and a default route.

The first picture shows HDC as a single entity, and its physical networks. The second picture shows the zones of HDC as Virtual Network Machines, which should give some insight into why I call my home server a Home Data Center.

HDC, physically HDC, logically

Home Data Center 3.0 -- Part 1: Back to AMD

Twelve years ago I built my first Home Data Center (HDC). Six years ago I had OmniTI's Supermicro rep put together the second one.

Unlike last time, I'm not going to recap the entirety of HDC 2.0. I will mention briefly that since its 2014 inception, I've only upgraded its mirrored spinning-rust disk drives twice: once from 2TB to 6TB, and last year from 6TB to 14TB. I'll detail the current drives in the parts list.

Like last time, and the time before it, I started with a CPU in mind. AMD has been on a tear with Ryzen and EPYC. I still wanted low-ish power, but since I use some of HDC's resources for work or the illumos community, I figured a core-count bump would be worth the cost of some watts. Lucky me, the AMD Ryzen 7 3700x fit the bill nicely: Double the cores & threads with a 20W TDP increase.

Unlike last time, but like the time before it, I built this one from parts myself. It took a little digging, and I made one small mistake in parts selection, but otherwise it all came together nicely.

Parts

  • AMD Ryzen 7 3700x - It supports up to 128GB of ECC RAM, it's double the CPU of the old HDC for only 50% more TDP wattage. It's another good upgrade.
  • Noctua NH-U12S (AM4 edition) CPU cooler - I was afraid the stock cooler would cover the RAM slots on the motherboard. Research suggested the NH-U12S would prevent this problem, and the research panned out. Also Noctua's support email, in spite of COVID, has been quite responsive.
  • ASRock Rack X470D4U - While only having two Gigabit Ethernet (GigE) ports, this motherboard was the only purpose-built Socket AM4 server motherboard. It has IPMI/BMC on its own Ethernet port (but you'll have to double check it doesn't "failover" to your first GigE port). It has four DIMM slots, and with the current BIOS (mine shipped with it), supports 128GB of RAM. There are variants with two 10 Gigabit Ethernet (10GigE) ports, but I opted for the less expensive GigE one. If I'd wanted to wait, there's a new, not yet available, X570 version, whose more expensive version has both two 10GigE AND two GigE ports, which would have saved me from needing...
  • Intel I350 dual-port Gigabit Ethernet card - This old reliable is well supported and tested. It brings me up to the four ethernet ports I need.
  • Nemix RAM - 4x32GB PC3200 ECC Unbuffered DIMMS - Yep, like HDC 2.0, I maxxed out my RAM immediately. 6 years ago I'd said 32GB would be enough, and for the most part that's still true, except I sometimes wish to perform multiple concurrent builds, or memory-map large kernel dumps for debugging. The vendor is new-to-me, and did not have a lot of reviews on Newegg. I ran 2.5 passes of memtest86 against the memory, and it held up under those tests. Nightly builds aren't introducing bitflips, which I saw on HDC 1.0 when it ran mixed ECC/non-ECC RAM.
  • A pair of 500GB Samsung 860 EVO SATA SSDs - These are slightly used, but they are mirrored, and partitioned as follows:
    • s0 -- 256MB, EFI System partition (ESP)
    • s1 -- 100GB, rpool for OmniOSce
    • s2 -- 64GB, ZFS intent log device (slog)
    • s3 -- 64GB, unused, possible future L2ARC
    • s4 -- 2GB, unused
    • The remaining 200-something GB is unassigned, and fodder for the wear-levellers. The motherboard HAS a pair of M.2 connectors for NVMe or SATA SSDs in that form-factor, but these were hand-me-downs, so free.
  • A pair of Western Digital Ultrastar (nee HGST Ultrastar) HC530 14TB Hard Drives - These are beasts, and according to Backblaze stats released less than a week ago, its 12TB siblings hold up very well with respect to failure rates.
  • Fractal Design Meshify C case - I'd mentioned a small mistake, and this case was it. NOT because the case is bad... the case is quite good, but because I bought the case thinking I needed to optimize for the microATX form factor, and I really didn't need to. The price I paid for this was the inability to ever expand to four 3.5" drives if I so desire. In 12 years of HDC, though, I've never exceeded that. That's why this is only a small mistake. The airflow on this case is amazing, and there's room for more fans if I ever need them.
  • Seasonic Focus GX-550 power supply - In HDC 1.0, I had to burn through two power supplies. This one has a 10 year warranty, so I don't think I'll have to stress about it.
  • OmniOSce stable releases - Starting with HDC 2.0, I've been running OmniOS, and its community-driven successor, OmniOSce. The every-six-month stable releases strike a good balance between refreshes and stability.

I've given two talks on how I use HDC. Since the last of those was six years ago, I'm going to stop now, and dedicate the next post to how I use HDC 3.0.

Now self-hosted at kebe.com

Let's throw out the first pitch.

I've moved my blog over from blogspot to here at kebe.com. I've recently upgraded the hardware for my Home Data Center (the subject of a future post), and while running the Blahg software doesn't require ANY sort of hardware upgrade, I figured since I had the patient open I'd make the change now.

Yes it's been almost five years since last I blogged. Let's see, since the release of OmniOS r151016, I've:

  • Cut r151018, r151020, and r151022.
  • Got RIFfed from OmniTI.
  • Watched OmniOS become OmniOSce with great success.
  • Got hired at Joyent and made more contributions to illumos via SmartOS.
  • Tons more I either wouldn't blog about, or just plain forgot to mention.
So I'm here now, and maybe I'll pick up again? The most prolific year I had blogging was 2007 with 11 posts, with 2011 being 2nd place with 10. Not even sure if I *HAVE* a half-dozen readers anymore, but now I have far more control over the platform (and the truly wonderful software I'm using).

While Blahg supports comments, I've disabled them for now. I might re-enable them down the road, but for now, you can find me on one of the two socials on the right and comment there.

Dan's blog is powered by blahgd