diff --git a/2006-01-23-ssh-tunnelling-101.html b/2006-01-23-ssh-tunnelling-101.html
index 6473d00..f6b8b5e 100644
--- a/2006-01-23-ssh-tunnelling-101.html
+++ b/2006-01-23-ssh-tunnelling-101.html
@@ -3,7 +3,7 @@

The Players

I’ll be referring to 3 hosts:

@@ -13,7 +13,7 @@
  • C: The client.
Configuring B

    Some sshd configuration needs to be done on B before any of this will work. In the sshd_config file (/etc/ssh/sshd_config on Debian):

    @@ -23,7 +23,7 @@ GatewayPorts yes

    Remember to restart sshd after making changes (/etc/init.d/ssh restart).

Building the Tunnel

    On A, run:

    @@ -39,7 +39,7 @@ GatewayPorts yes

    As with all shell commands, put a “&” on the end to run it in the background.
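For reference, a command of the shape being built here might look like the following (the port numbers, username, and hostname are illustrative placeholders, not the post's exact invocation):

```shell
# Run on A: publish A's sshd (local port 22) as port 2222 on B.
# With GatewayPorts enabled on B, C can then connect via: ssh -p 2222 user@B
ssh -N -R 2222:localhost:22 user@B &
```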

Tunnelling FTP

    Due to a trick in the FTP protocol, you can use this tunnelling arrangement but have FTP data connections go directly from A to C, without touching B. This only works with so-called “active” FTP (using the PORT command instead of PASV). C must also be unfirewalled for this to work.

diff --git a/2006-01-24-finally-sane-mysql-clustering.html b/2006-01-24-finally-sane-mysql-clustering.html
index cb2377b..ce60476 100644
--- a/2006-01-24-finally-sane-mysql-clustering.html
+++ b/2006-01-24-finally-sane-mysql-clustering.html
@@ -48,7 +48,7 @@ SHOW DATABASES;
    SHOW DATABASES;
     
The AUTO_INCREMENT problem

    AUTO_INCREMENT-type columns get used in just about every MySQL table. They’re a quick way to build primary keys without thinking. However, there are obvious problems in a multi-master setup (if inserts happen on both servers at the same time, they’ll both get the same ID). The official MySQL solution (start the IDs on both servers at numbers significantly different from each other) is a nasty hack.
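For concreteness, the offset scheme being dismissed here is driven by two server variables; a sketch for a two-master pair (the values are illustrative):

```
# my.cnf on master 1: generates IDs 1, 3, 5, ...
[mysqld]
auto_increment_increment = 2
auto_increment_offset    = 1

# my.cnf on master 2: generates IDs 2, 4, 6, ...
[mysqld]
auto_increment_increment = 2
auto_increment_offset    = 2
```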

diff --git a/2006-02-02-rebooting-linux-when-it-doesnt-feel-like-it.html b/2006-02-02-rebooting-linux-when-it-doesnt-feel-like-it.html
index 3f5639d..66517da 100644
--- a/2006-02-02-rebooting-linux-when-it-doesnt-feel-like-it.html
+++ b/2006-02-02-rebooting-linux-when-it-doesnt-feel-like-it.html
@@ -1,4 +1,4 @@

diff --git a/2006-03-21-redundant-network-interfaces.html b/2006-03-21-redundant-network-interfaces.html
index d8798ec..97ce186 100644
--- a/2006-03-21-redundant-network-interfaces.html
+++ b/2006-03-21-redundant-network-interfaces.html
@@ -18,7 +18,7 @@
Yes? bonding:802.3ad. No? bonding:active-backup
bonding:balance-alb. No? bonding:active-backup.

STP

    This bonding method actually uses Linux’s support for interface bridging. If a bridge is set up between two interfaces connected to the same network and spanning tree protocol is activated, one interface will be put into blocking state and won’t pass traffic. This doesn’t aggregate bandwidth between interfaces when both are up, but it has the interesting effect of allowing the server to bridge traffic between the switches if there are no other available connections. Special configuration at the switches is required to prevent it from being used as a link under normal circumstances.

    @@ -66,7 +66,7 @@ brctl showstp br0

    One side of one interface should be blocking.

bonding

    For any of the bonding methods, you’ll need the ifenslave program. In Debian:

    @@ -91,7 +91,7 @@ down ifenslave -d bond0 eth0 eth1
    ifup bond0
     
bonding:active-backup

    This bonding mode keeps one interface completely blocked (including not sending ARP replies out it), using it strictly as a backup.
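As a sketch of how this mode gets selected on a Debian system of that era (the module options shown are an assumption on my part; the general bonding instructions above are the authoritative path):

```
# /etc/modprobe.d: load the bonding driver in active-backup mode,
# checking link state every 100ms.
alias bond0 bonding
options bonding mode=active-backup miimon=100
```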

    @@ -102,7 +102,7 @@ down ifenslave -d bond0 eth0 eth1

    Follow the general bonding instructions above, and you’re all set!

bonding:802.3ad

    This bonding mode uses the standardized IEEE 802.3ad bonding method, with a protocol (LACP) for both sides to agree on bonding information. All links must be the same speed and duplex. The balancing method between links is determined by each end; a single connection will only go over one link, and sometimes traffic with a single (ethernet-level) peer will use a single link as well.

    @@ -127,7 +127,7 @@ end

    Then follow the general bonding instructions.

bonding:balance-alb

This bonding mode balances outgoing traffic according to interface speed and usage. It intercepts and rewrites outgoing ARP replies to make them come from different physical interfaces, tricking the network fabric into balancing incoming traffic as well.

diff --git a/2006-12-05-fixing-your-home-soho-network.html b/2006-12-05-fixing-your-home-soho-network.html
index e2612dc..614a21b 100644
--- a/2006-12-05-fixing-your-home-soho-network.html
+++ b/2006-12-05-fixing-your-home-soho-network.html
@@ -5,7 +5,7 @@

This post is going to stray a bit from my usual geeky fare. I’m being asked far too often to help sort out haphazard home network designs that are causing real problems for their users. I decided to collect all the answers I’ve given in a single place that I can point to when asked.

General Principles

    Simplicity

    @@ -19,7 +19,7 @@

    A Cisco Aironet is going to crash less than a Linksys access point. A Cisco switch is going to provide better throughput than a NetGear one. This isn’t a hard-and-fast rule; there are certainly decent, cheap network devices out there. However, generally, if a device has a plastic case and an external power supply and you’ve got more than 3 people depending on it, you’re going to regret the decision.

Sorting out the devices

    1. Diagram your network. Knowing what you have and how it’s all connected together is a critical first step toward fixing any of it. A complete diagram looks like:
      @@ -38,7 +38,7 @@ sudo dhclient eth0

      You should see one or more lines starting with DHCPOFFER and telling you where the offer came from. If you see more than one source of offers, you need to eliminate extra DHCP servers.

Troubleshooting slowness

    By far the most common network issue seems to be nebulous “slowness”. We’ll try to eliminate possibilities one by one.

    @@ -68,11 +68,11 @@ sudo dhclient eth0

You’ll want to make the window a bit bigger. This application gives you real-time timing data following the route from your network to firestuff.org. Watch the average times and loss percentages. Nothing above 0% is really acceptable loss; your ISP will probably claim that it is, but they’re lying. Remember that the numbers are cumulative: if hop 3 is dropping packets, those drops will affect hop 3 and everything beyond it. However, it’s really hop 3’s problem, and if hop 6 has a problem, it’ll be hard to see until you get the closer issue cleared up.
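The application being described sounds like mtr; assuming so, a non-interactive run that prints the same cumulative per-hop statistics might be (the cycle count is arbitrary):

```shell
# Print average latency and loss percentage for every hop after 60 cycles.
mtr --report --report-cycles 60 firestuff.org
```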

Troubleshooting idle disconnects

    Do you have long-running connections (SSH, telnet, MySQL, etc.) that get disconnected when they’re not doing anything? It’s your NAT device’s fault. Period. If it doesn’t have a setting to change the maximum idle time for a connection, throw it out and buy one that does.

Troubleshooting wireless problems

    1. Does rebooting your wireless router/access point fix it? Throw it out and buy a real one (I kept buying new Linksys/NetGear products until I gave up and started buying Cisco. Oddly, since then, things work).
2.
diff --git a/2010-04-07-wireless-network-optimization-2010-edition.html b/2010-04-07-wireless-network-optimization-2010-edition.html
index ed3c33b..05360fb 100644
--- a/2010-04-07-wireless-network-optimization-2010-edition.html
+++ b/2010-04-07-wireless-network-optimization-2010-edition.html
@@ -5,39 +5,39 @@

      After getting the Internet connection all tuned up, it's time to talk network speed.

How fast do you need to go?

      Talking about gigabit speeds around the office drew some incredulity. Most users seem to be used to talking about Internet connection speeds in the sub-10mbit range, so a 10mbit hub (which my new apartment came prewired with) and 802.11b (6mbit/s TCP) or at least 802.11g (~30mbit/s TCP) will pretty much suffice. Political arguments about the mess that is US last-mile Internet connections aside, however, there are expensive options at higher speeds. Some areas have FiOS (though Verizon has apparently stopped rolling it out), and Comcast has a 50mbit/s "Extreme" plan in my area for $115/month. DOCSIS 3 supports up to ~160mbit/s link speed. Broadband speeds don't obey Moore's law (mostly due to the enormous infrastructure investment required to deploy new tech), but we'll still probably see cable plans breaking the 100mbit/s barrier in 3 or 4 years max. In short, it's a gigabit ethernet (~700mbit/s in real life) and 802.11n (~150mbit/s with today's gear) world for the short- to medium-term.

Defining your users

My use cases for wireless at home divide pretty neatly into two categories: high-bandwidth, low-latency streaming to fixed points (Mac Minis hooked up to my TVs), and bursty-bandwidth, can-survive-momentary-latency clients that move around a lot (laptop, cellphone, iPad). I'd like both to be able to max out my Internet connection, but the video streaming needs to be able to do better inside my network (streaming from my iMac). This may get murkier once Apple gets iTunes streaming to the iPad working.

The first hop

      No amount of optimization on the wireless side is going to help if the cable modem to router hop can't push the full speed of the 'net connection (this presumes that they're two different devices for you). First, both should support GigE (I have a Motorola SB6120 and a D-Link DIR-855). It's harder than it should be to verify the connection speed between these two; in the end, I had to force the link on the router end to 1000mbit/s-only, then make sure it still connected.

      My apartment isn't wired ideally, so the cable modem and router are in different places. The apartment has ethernet throughout, but it's only wired with 2 pairs (out of the 4 pairs in an RJ45 connector); that's only sufficient for 100mbit/s links. Kacirek brought over the toolkit and we appropriated some telephone wiring to serve as the extra two pairs, replacing the 10mbit hub with an ethernet coupler.

Going N-only

      The 802.11b to 802.11g migration was a mess; networks effectively dropped back to all-B in the presence of even a single B device. G to N isn't as bad, but it's not great; 802.11 continues to accumulate backwards-compatibility hacks all over the place. However, I was surprised to find that every device except my old T60 supports N, including my Nexus One. It didn't ship with the support, and Google never indicated that an upgrade to it was forthcoming, but it must have snuck out with a firmware upgrade somewhere. After digging out an old 802.11n mini-PCI card that I bought years ago and upgrading the T60, I was able to switch from G/N to just N. This is probably a significant win, if you can manage to upgrade all your devices. If not, confining the older ones to 2.4GHz (leaving 5GHz to pure-N, rather than A/N) is probably your best bet.

There's N, then there's N

      802.11n has to be one of the most consumer-confusing specs ever. N works by using multiple antennas to build virtual "spatial streams". For example, radio one has antennas 1A and 1B; radio two has 2A, 2B and 2C. The silicon supports 2 spatial channels, which get built between, e.g. 1A and 2B, 1B and 2C. These spatial channels are treated as separate links, even though they're on the same frequency. There are 16 possible antenna/radio configurations and 30 antenna/radio/spatial channel configurations. The configurations are abbreviated AxB:C (A: transmit radios/antennas, B: receive radios/antennas, C: processor-supported spatial channels). The spec goes up to 4x4:4. Unfortunately, this means that 3-antenna systems aren't necessarily 3-stream (and most sold today aren't). You can't have more streams than your lowest radio/antenna count, and your maximum speed is determined by your number of streams and frequency width. N can use 20MHz or 40MHz of radio spectrum. The DIR-855 I bought seems to be either 2x3:2 or 3x3:2; 300mbit/s max at 40MHz. It seems to be impossible to buy 3- or 4-stream consumer gear at the moment (and you need client gear to support it, so it wouldn't be too useful).

2.4GHz vs. 5GHz

      802.11n makes the frequency choice even harder than it used to be. 2.4GHz is an overpopulated ghetto unless you live on double-digit acreage. It penetrates walls significantly better than 5GHz, but that's a blessing and a curse: you can use it from further away, but your neighbors interfere from further away. Even worse, at 40MHz, 802.11n takes 2 of the 3 non-overlapping 2.4GHz channels. That means that if you can see two or more neighboring access points, you're not getting full speed. The penetration advantages are significant, though: my iPad gets 6mbit/s link speed on 5GHz at the furthest point in my apartment from my access point. At 2.4GHz, it gets 26mbit/s.

      Dual-band solutions help, but you have to be careful. Assign different SSIDs to your 2.4GHz and 5GHz networks, so you can force clients to one or the other. Put things like video streaming in 5GHz, where a neighbor download isn't likely to cause hiccups. Test your other devices at maximum range, and see whether you can live with the 5GHz signal level.

A little more range

      If you'd like to squeak just a little more range out of your access point, either to be able to use 5GHz where you would've used 2.4GHz, or to be able to reach far-away spots with anything at all, consider replacement antennas. Higher-grade access points support them, and they'll buy you a little bit, though don't expect miracles. I picked up 3 of these, which help a bit without taking it to ridiculous extremes.

Other optimizations

      Location, location, location: put your access point in the middle of your coverage area. It's the simplest thing you can do to get massive speed gains.

diff --git a/2010-04-10-home-video-rethink.html b/2010-04-10-home-video-rethink.html
index 6122b64..f6042b3 100644
--- a/2010-04-10-home-video-rethink.html
+++ b/2010-04-10-home-video-rethink.html
@@ -3,11 +3,11 @@

Choosing a platform

      There's no shortage of alternatives to the traditional cable box + TV model, from cable provider DVRs to TiVo (yes, people actually still own those) to more obscure offerings like Myka, or MythTV running on your closest whitebox. However, if you want to combine easy content sourcing, central storage/management with streaming, a nice remote control interface and solid, attractive hardware, there's really only one option: for better or worse, Apple's iTunes/Front Row.

Electronics

      I already had an iMac that had ended up at a common-area computer desk in my apartment. This seemed a reasonable choice for a media server, though I suppose I could've shot for something that had a concept of running headless (another Mac Mini).

      @@ -15,23 +15,23 @@

      As actual displays, I went with LED-backlit Samsung LCDs, for the lower power usage and the light weight for wall hanging. Add in some cables and we're good to go...except that it's all sitting on the floor.

Wall mounting

      Fortunately, my two TVs were wall mount efforts #6 and 7 for Kacirek, so this went really smoothly. I picked up wall mount kits from Monoprice. In short: stud finder, level, pilot holes, lag bolts, bolts to the TV, hang, done. There are even nice wall mount kits for Mac Minis, naturally at more than twice the price of the LCD mounts, since they count as "designer". The Mac Mini power adapter fits really nicely in a cable-management cutout at the back of the TV. Add in an extension cord and some cable management from Fry's, and voila:

      [images lost in Picasa shutdown]

Front Row love and hate

      Front Row is, at times, awesome. It remembers pause position across different streaming clients. The interface is simple and useful. Over a fast network, seeking and fast-forward are lightning-quick. It doesn't let you set a default streaming source, but that only adds a couple of clicks.

      Sometimes, it's horribly frustrating. It hangs indefinitely and uninterruptably trying to load remote library contents. It forgets pause position, even on the same machine. None of these are repeatable, so trying to solve them seems nigh-impossible.

Unofficial content

      iTunes also doesn't want you using their fancy toys with torrented files; it won't let you add them to your library, and if you change the file type to work around that, it still won't stream them to remote clients. Fortunately, this is pretty simple to work around. You need Quicktime Pro, which comes with Final Cut Studio, is cheap to buy separately, or can be obtained by whatever means you like. It hides in Utilities once installed, and is easy to confuse with Apple's stripped-down but base-install Quicktime Player. Follow steps 1-4 here, and your torrented content is now draggable into iTunes and streamable to Front Row clients. It doesn't re-encode unless you do steps 5-8, so it's fast and you don't lose quality.

Automatic wake-up

I also wanted the Mini clients, when they wake up, to wake the iMac file server automatically, so I didn’t have to leave it running all the time. Again, this isn’t too hard. First, pick up SleepWatcher, clearly written and packaged by someone who’s never heard of dmg or a Makefile (but it works). Install wakeonlan, a tiny little Perl script that sends Wake-on-LAN magic packets. Then, add something to /etc/rc.wakeup like:

      @@ -45,7 +45,7 @@ done

      Your path to wakeonlan, MAC address (of your fileserver) and packet count (and time) required for your network interface to come online may vary.
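The script itself is elided above; a minimal sketch of the idea, with placeholder wakeonlan path, MAC address, and repeat count:

```shell
#!/bin/sh
# /etc/rc.wakeup sketch: fire several Wake-on-LAN packets so that at
# least one arrives after this machine's network interface is back up.
for i in 1 2 3 4 5; do
    /usr/local/bin/wakeonlan 00:16:cb:00:00:01
    sleep 1
done
```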

Hello, iPad?

      It would be really great to be able to pull a Minority Report-style video transfer, moving streaming video seamlessly from a TV to the iPad and walking away with it. This is a pipe dream, however, until Apple decides to actually support streaming on the iPad. Seriously, Apple, I have to plug this thing in and copy the whole video to it to watch it?

diff --git a/2016-02-24-down_the_epoll_rabbit_hole.html b/2016-02-24-down_the_epoll_rabbit_hole.html
index c58dc81..fc04ef7 100644
--- a/2016-02-24-down_the_epoll_rabbit_hole.html
+++ b/2016-02-24-down_the_epoll_rabbit_hole.html
@@ -7,7 +7,7 @@

      I’ll be showing observed behavior through strace and tcpdump output.

Setup

      Our test environment starts with two sockets connected to each other. There’s also a listening socket, only used to accept the initial connection, and an epoll fd. Both of the connected sockets are added to the epoll watch set, with most possible level-triggered flags enabled.

      @@ -39,7 +39,7 @@ epoll_wait(3, {{EPOLLOUT, {u32=5, u64=5}}, {EPOLLOUT, {u32=6, u64=6}}}, 8, 0) =

      We now have two file descriptors, 5 and 6, that are opposite ends of the same TCP connection. They’re both in the epoll set of epoll file descriptor 3. They’re both signaling writability (EPOLLOUT), and nothing else. All is as expected.

shutdown(SHUT_RD)

      Now let’s call shutdown(5, SHUT_RD).

      @@ -68,7 +68,7 @@ epoll_wait(3, {{EPOLLIN|EPOLLOUT|EPOLLRDHUP, {u32=6, u64=6}}}, 8, 0) = 1

      Side note: notice that close(5) causes automatic removal of that socket from the epoll set. This is handy, but see dup() below.

shutdown(SHUT_WR)

      Let’s rewind and test with SHUT_WR (write).

      @@ -103,7 +103,7 @@ epoll_wait(3, {{EPOLLIN|EPOLLOUT|EPOLLERR|EPOLLHUP|EPOLLRDHUP, {u32=6, u64=6}}},

      The only oddity here is that calling close(5) doesn’t change any of the epoll status flags for fd 6. Once you attempt to write to fd 6, however, every flag on the planet starts firing, including EPOLLERR and EPOLLHUP.

dup()

      Rewinding to our setup state again, let’s look at dup().

      @@ -134,7 +134,7 @@ epoll_wait(3, {{EPOLLIN|EPOLLOUT|EPOLLRDHUP, {u32=6, u64=6}}}, 8, 0) = 1

      Here’s crazy town, though. close(5) doesn’t remove it from the epoll set. epoll is waiting for the underlying socket to close, and fd 7’s existence is keeping it alive. Trying to remove fd 5 from the epoll set also fails. The only way to get rid of it seems to be to close(7), which removes both from the set and causes fd 6 to signal EPOLLIN and EPOLLRDHUP.

shutdown(SHUT_RD) + dup()

      dup(5)                                  = 7
       epoll_ctl(3, EPOLL_CTL_ADD, 7, {EPOLLIN|EPOLLOUT|EPOLLERR|EPOLLHUP|EPOLLRDHUP, {u32=7, u64=7}}) = 0
      @@ -147,7 +147,7 @@ epoll_wait(3, {{EPOLLIN|EPOLLOUT|EPOLLRDHUP, {u32=5, u64=5}}, {EPOLLOUT, {u32=6,
       
       

      The takeaway here is that shutdown() operates on the underlying socket endpoint, not the file descriptor. Calling shutdown(7, SHUT_RD) causes both fd 5 and 7 to signal EPOLLIN and EPOLLRDHUP.

shutdown(SHUT_WR) + dup()

      dup(5)                                  = 7
       epoll_ctl(3, EPOLL_CTL_ADD, 7, {EPOLLIN|EPOLLOUT|EPOLLERR|EPOLLHUP|EPOLLRDHUP, {u32=7, u64=7}}) = 0
      @@ -164,7 +164,7 @@ epoll_wait(3, {{EPOLLOUT, {u32=5, u64=5}}, {EPOLLIN|EPOLLOUT|EPOLLRDHUP, {u32=6,
       
       

      As expected, shutdown(7, SHUT_WR) causes fd 6 to signal EPOLLIN and EPOLLRDHUP.

Conclusions

      • If you’re using dup() and epoll, you need to call epoll_ctl(EPOLL_CTL_DEL) before calling close(). It’s hard to imagine getting sane behavior any other way. If you never use dup(), you can just call close().
•
diff --git a/2016-03-01-asynchronous-name-resolution-in-c.html b/2016-03-01-asynchronous-name-resolution-in-c.html
index 36be5f8..ca0a174 100644
--- a/2016-03-01-asynchronous-name-resolution-in-c.html
+++ b/2016-03-01-asynchronous-name-resolution-in-c.html
@@ -5,7 +5,7 @@

Down another rabbit hole, this time into yet another seemingly simple problem: how do you turn a name into an address that you can connect() to without blocking your thread, in 2016? Let’s survey the state of the world:

getaddrinfo_a()

Look, it’s exactly what we need! Just joking. It has the same resolution behavior as getaddrinfo(), but:

        @@ -15,7 +15,7 @@
      • It uses sigevent to notify completion. This gives you a choice between a signal and a new thread. I thought we were doing this to avoid having to create a new thread for each resolution?
      -

libadns

      • It’s GPLed. That’s cool, but it does limit options.
      • @@ -23,7 +23,7 @@
      • It will hand you file descriptors to block on, but you have to ask (using adns_beforeselect()). This is designed for poll(), but doesn’t work well with epoll; it doesn’t tell you when to add and remove fds, so you have to track them yourself (since you can’t iterate an epoll set), diff them since the last result, and change the epoll set. It’s a mess.
      -

libasyncns

      • It uses getaddrinfo() underneath, so you get standard behavior. Woo!
      • @@ -31,7 +31,7 @@
      • Its API isn’t too crazy, but you wouldn’t call it simple.
      -

c-ares

      I failed to find docs for this, but I found a gist with an example. Looks like the API is designed for use with select(), though there’s a hook to get an fd when it’s created, so you might be able to associate it with a query, possibly unreliably. Again, you’d have to recreate getaddrinfo() behavior yourself. Also, this gem is at the top of the header:

      @@ -46,7 +46,7 @@

      So maybe not.

So now what?

      Maybe we can build something. I really don’t need to write another DNS library in my lifetime (the c-ares Other Libraries page links to my previous one, humorously). Let’s see if we can scope some requirements:

diff --git a/2016-03-13-raspbian-setup-notes.html b/2016-03-13-raspbian-setup-notes.html
index a001413..65afa90 100644
--- a/2016-03-13-raspbian-setup-notes.html
+++ b/2016-03-13-raspbian-setup-notes.html
@@ -11,7 +11,7 @@

      Start with Raspbian Lite. NOOBS has an extra boot step, and Raspbian full version has a GUI and stuff like Wolfram Engine that you probably don’t want.

Log in

      Use console, or grab the IP from your router’s DHCP client list and:

      @@ -19,7 +19,7 @@ # password "raspberry"
Expand filesystem

      sudo raspi-config --expand-rootfs
       sudo reboot
      @@ -27,19 +27,19 @@ sudo reboot
       
       

      Wait for reboot. Reconnect as above.

Update

      sudo apt-get -y update
       sudo apt-get -y dist-upgrade
       
Update firmware

      sudo apt-get -y install rpi-update
       sudo rpi-update
       
Enable overclock (optional)

      Pis seem to be relatively stable overclocked, even without a heatsink.

@@ -52,19 +52,19 @@ sudo rpi-update
# Select "<No>"
Disable swap

      sudo dphys-swapfile uninstall
       
Create a new user

      sudo adduser <username>
       # Follow prompts
       sudo usermod --append --groups sudo <username>
       
SSH in as the new user

      # ON YOUR PI
       # Find your Pi's current IP, you don't know it
      @@ -82,7 +82,7 @@ scp ~/.ssh/id_ed25519.pub <username>@<ip>:.ssh/authorized_keys
       ssh <username>@<ip>
       
Lock down sshd

      The SSH server has a lot of options turned on by default for compatibility with a wide range of clients. If you’re connecting only from modern machines, and you’ve gotten public key authentication working as described above (and tested it!), then you can turn off lots of the legacy options.
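As an illustration of the kind of options involved (my assumption, not the post's exact list), an sshd_config tightened for key-only access might include:

```
# /etc/ssh/sshd_config (illustrative subset)
PasswordAuthentication no
ChallengeResponseAuthentication no
PermitRootLogin no
```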

@@ -121,7 +121,7 @@ END
# Enter password for sudo
Enable the hardware random number generator

      Note that hardware random number generators are controversial.

@@ -130,7 +130,7 @@
echo bcm2835_rng | sudo tee --append /etc/modules
sudo apt-get -y install rng-tools

Enable the hardware watchdog

      This has false negatives (failures to reboot when it should) for me, but never false positives.

@@ -140,28 +140,28 @@
watchdog-device = /dev/watchdog
END

Enable automatic updates

      sudo apt-get -y install unattended-upgrades
       sudo dpkg-reconfigure -plow unattended-upgrades
       # Choose "<Yes>"
       
Disable avahi

      You didn’t need mdns, did you?

      sudo systemctl disable avahi-daemon.service
       
Disable triggerhappy

      You didn’t need volume buttons, did you?

      sudo systemctl disable triggerhappy.service
       
Disable frequency scaling

Do this only if you’re not planning to run on battery; this thing is slow enough anyway.

@@ -171,7 +171,7 @@
GOVERNOR="performance"
END

Enable lldpd

      This allows you to observe network topology if you have managed switches.

@@ -181,28 +181,28 @@
DAEMON_ARGS="-c"
END

Remove the pi user

      Well-known username, well-known password, no thank you.

      sudo deluser pi
       
Install busybox-syslogd

      You give up persistent syslogs, but you reduce SD writes. You can still run “logread” to read logs since boot from RAM.

      sudo apt-get -y install busybox-syslogd
       
Reboot

Reboot to test that the changes work and to let some of them (like disabling auto-login) take effect.

      sudo reboot
       
After reboot

      Note that ssh may scream “REMOTE HOST IDENTIFICATION HAS CHANGED!”; that’s a symptom of the sshd_config changes above. Just remove the line from the known_hosts file and reconnect.
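Rather than hand-editing known_hosts, ssh-keygen can remove the stale entry (substitute your Pi's address for <ip>, matching the placeholders above):

```shell
# Deletes the old host key for <ip> from ~/.ssh/known_hosts,
# keeping a known_hosts.old backup.
ssh-keygen -R <ip>
```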

diff --git a/2016-03-13-wifi-client-router-setup.html b/2016-03-13-wifi-client-router-setup.html
index 3576545..c37a7c7 100644
--- a/2016-03-13-wifi-client-router-setup.html
+++ b/2016-03-13-wifi-client-router-setup.html
@@ -15,7 +15,7 @@

      If you’ve got a router at the front of your network that supports static routes, though, you’ve got a conceptually simpler option: build a wireless client router. This is still a lot of moving parts and things to go wrong, but those things are going to be more debuggable when they do.

Shopping list

Create an intermediate key

      openssl ecparam -name secp384r1 -genkey | openssl ec -aes-256-cbc -out intermediate/private/intermediate.key.pem
       # Create strong intermediate key password
       chmod 400 intermediate/private/intermediate.key.pem
       
Create an intermediate certificate signing request (CSR)

      openssl req -config openssl.cnf -new -key intermediate/private/intermediate.key.pem -out intermediate/csr/intermediate.csr.pem  -subj '/C=US/ST=California/O=XXXX/OU=XXXX Certificate Authority/CN=XXXX Intermediate'
       # Enter intermediate key password
       
Sign intermediate cert with root key

      openssl ca -config openssl.cnf -name ca_root -extensions ext_intermediate -notext -in intermediate/csr/intermediate.csr.pem -out intermediate/certs/intermediate.cert.pem
       # Enter root key password
       chmod 444 intermediate/certs/intermediate.cert.pem
       
Verify intermediate cert

      openssl x509 -noout -text -in intermediate/certs/intermediate.cert.pem
       openssl verify -CAfile root/certs/root.cert.pem intermediate/certs/intermediate.cert.pem
      @@ -197,13 +197,13 @@ openssl verify -CAfile root/certs/root.cert.pem intermediate/certs/intermediate.
       
OK
-

      Create a chain certificate file

      +

      Create a chain certificate file

      cat intermediate/certs/intermediate.cert.pem root/certs/root.cert.pem > intermediate/certs/chain.cert.pem
       chmod 444 intermediate/certs/chain.cert.pem
       
      -

      Create a client key

      +

      Create a client key

      You can substitute “server” for “client” for a server cert.
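The key-generation command itself falls outside the visible hunk; a hypothetical stand-in, mirroring the earlier intermediate key step (the file name and the non-interactive `pass:demo` password are mine, not the post's):

```shell
# Hypothetical stand-in for the elided client key step; mirrors the
# intermediate key creation earlier in the post. "pass:demo" replaces
# the interactive password prompt purely so this runs non-interactively.
openssl ecparam -name secp384r1 -genkey \
  | openssl ec -aes-256-cbc -passout pass:demo -out test1.key.pem
chmod 400 test1.key.pem
```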

@@ -212,19 +212,19 @@ chmod 444 intermediate/certs/chain.cert.pem
chmod 400 client/private/test1.key.pem
      -

      Create a client certificate signing request (CSR)

      +

      Create a client certificate signing request (CSR)

      openssl req -config openssl.cnf -new -key client/private/test1.key.pem -out client/csr/test1.csr.pem  -subj '/C=US/ST=California/O=XXXX/OU=XXXX Test/CN=XXXX Test 1'
       
      -

      Sign client cert with intermediate key

      +

      Sign client cert with intermediate key

      openssl ca -config openssl.cnf -extensions ext_client -notext -in client/csr/test1.csr.pem -out client/certs/test1.cert.pem
       # Enter intermediate key password
       chmod 444 client/certs/test1.cert.pem
       
      -

      Verify client cert

      +

      Verify client cert

      openssl x509 -noout -text -in client/certs/test1.cert.pem
       openssl verify -CAfile intermediate/certs/chain.cert.pem client/certs/test1.cert.pem
      @@ -240,7 +240,7 @@ openssl verify -CAfile intermediate/certs/chain.cert.pem client/certs/test1.cert
       
OK
-

      Create a PKCS#12 bundle for the client

      +

      Create a PKCS#12 bundle for the client

      This is an easy(er) way to get all the necessary keys & certs to the client in one package.

@@ -248,14 +248,14 @@ openssl verify -CAfile intermediate/certs/chain.cert.pem client/certs/test1.cert
# Enter both the client key password, and a new password for the export; you'll need to give the latter to the client
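The `openssl pkcs12 -export` command itself is cut off by the diff; a self-contained sketch of the same idea (all names and passwords below are throwaway stand-ins, not the post's):

```shell
# Self-contained sketch of a PKCS#12 export (the post's exact command is
# elided by the diff; demo.* names and passwords are stand-ins).
openssl ecparam -name prime256v1 -genkey -noout -out demo.key.pem
openssl req -new -x509 -key demo.key.pem -subj '/CN=XXXX Test 1' -days 1 -out demo.cert.pem
# Bundle key + cert into one PKCS#12 file; the export password is the one
# you'd hand to the client.
openssl pkcs12 -export -inkey demo.key.pem -in demo.cert.pem -passout pass:exportpw -out demo.p12
```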
      -

      Generate a certificate revocation list (CRL)

      +

      Generate a certificate revocation list (CRL)

      Initially empty. You can also do this for your root CA.

      openssl ca -config openssl.cnf -gencrl -out intermediate/crl/intermediate.crl.pem
       
      -

      Verify certificate revocation list

      +

      Verify certificate revocation list

      openssl crl -in intermediate/crl/intermediate.crl.pem -noout -text
       
      @@ -267,7 +267,7 @@ openssl verify -CAfile intermediate/certs/chain.cert.pem client/certs/test1.cert
Signature algorithm (ecdsa-with-SHA256)
-

      Revoke a certificate

      +

      Revoke a certificate

      Only do this if you need to. Find the certificate:

diff --git a/2016-03-26-nitrokey-hsm-ec-setup.html b/2016-03-26-nitrokey-hsm-ec-setup.html
index 6051916..8127c8d 100644
--- a/2016-03-26-nitrokey-hsm-ec-setup.html
+++ b/2016-03-26-nitrokey-hsm-ec-setup.html
@@ -13,16 +13,16 @@

      Below are the steps to get the Nitrokey HSM to a working state where it can generate an EC key pair, and (self-)sign a cert with it. Hopefully many of these go away in the future, as support percolates into release versions and distribution packages.

      -

      Hardware & setup

      +

      Hardware & setup

      These instructions were developed and tested on a Raspberry Pi. Base setup instructions are here. You’ll also need a Nitrokey HSM, obviously.

      -

      Install prerequisites

      +

      Install prerequisites

      sudo apt-get install pcscd libpcsclite-dev libssl-dev libreadline-dev autoconf automake build-essential docbook-xsl xsltproc libtool pkg-config git
       
      -

      libccid

      +

      libccid

      You’ll need a newer version of libccid than currently exists in Raspbian Jessie (1.4.22 > 1.4.18). You can download it for your platform here, or use the commands below for an RPi.

@@ -30,7 +30,7 @@ sudo dpkg -i libccid_1.4.22-1_armhf.deb

-

      Install libp11

      +

      Install libp11

engine_pkcs11 requires libp11 >= 0.3.1. Raspbian Jessie has 0.2.8. Debian sid has a package, but you need the dev package as well, so you might as well build it.

@@ -43,7 +43,7 @@ sudo make install
cd ..

-

      Install engine_pkcs11

      +

      Install engine_pkcs11

      EC requires engine_pkcs11 >= 0.2.0. Raspbian Jessie has 0.1.8. Debian sid also has a package that I haven’t tested.

@@ -56,7 +56,7 @@ sudo make install
cd ..

-

      Install OpenSC

      +

      Install OpenSC

      As of writing (2016/Mar/26), working support for the Nitrokey HSM requires a build of OpenSC that hasn’t made it into a package yet (0.16.0). They’ve also screwed up their repository branching, so master is behind the release branch and won’t work.

@@ -69,24 +69,24 @@ sudo make install
cd ..

-

      Misc

      +

      Misc

      sudo ldconfig
       
      -

      Initialize the device

      +

      Initialize the device

      /usr/local/bin/sc-hsm-tool --initialize
       

      If this tells you that it can’t find the device, you probably forgot to update libccid, and need to start over. You’ll need to set an SO PIN and PIN the first time. The SO PIN should be 16 characters, and the PIN 6. Both should be all digits. They can technically be hex, but some apps get confused if they see letters.

      -

      Generate a test EC key pair

      +

      Generate a test EC key pair

/usr/local/bin/pkcs11-tool --module /usr/local/lib/opensc-pkcs11.so --login --keypairgen --key-type EC:prime256v1 --label test
       
      -

      Generate a self-signed cert

      +

      Generate a self-signed cert

      openssl
       OpenSSL> engine -t -pre SO_PATH:/usr/lib/arm-linux-gnueabihf/openssl-1.0.0/engines/libpkcs11.so -pre ID:pkcs11 -pre LIST_ADD:1 -pre LOAD -pre MODULE_PATH:/usr/local/lib/pkcs11/opensc-pkcs11.so dynamic
      diff --git a/2016-03-27-ec-ca-redux-now-with-more-nitrokey.html b/2016-03-27-ec-ca-redux-now-with-more-nitrokey.html
      index 99d28ae..a80fff9 100644
      --- a/2016-03-27-ec-ca-redux-now-with-more-nitrokey.html
      +++ b/2016-03-27-ec-ca-redux-now-with-more-nitrokey.html
      @@ -13,7 +13,7 @@
       
       

      XXXX is still our placeholder of choice.

      -

      Create directory structure

      +

      Create directory structure

      mkdir ca
       cd ca
      @@ -24,7 +24,7 @@ echo 1000 | tee {root,intermediate}/{serial,crlnumber}
       chmod 700 {client,server}/private
       
      -

      Create openssl.cnf

      +

      Create openssl.cnf

      cat > openssl.cnf <<'END'
       openssl_conf = openssl_init
      @@ -142,12 +142,12 @@ init          = 0
       END
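Most of the heredoc body is elided by the diff; purely as a hypothetical illustration, the later commands imply sections along these lines (every name and value below is a guess, not the post's actual config):

```ini
# Hypothetical sketch only -- the real openssl.cnf is elided above.
[ ca_root ]                    ; selected via "openssl ca -name ca_root"
dir              = root
database         = $dir/index.txt
serial           = $dir/serial
default_md       = sha256

[ ext_root ]                   ; used by "openssl req -extensions ext_root"
basicConstraints = critical, CA:TRUE

[ ext_intermediate ]           ; used when signing the intermediate CSR
basicConstraints = critical, CA:TRUE, pathlen:0
keyUsage         = critical, keyCertSign, cRLSign

[ ext_client ]                 ; used when signing client CSRs
basicConstraints = CA:FALSE
keyUsage         = critical, digitalSignature
extendedKeyUsage = clientAuth
```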
       
      -

      Tell future commands to use your new conf file

      +

      Tell future commands to use your new conf file

      export OPENSSL_CONF=openssl.cnf
       
      -

      Create a root key

      +

      Create a root key

      Insert your root HSM.

@@ -155,14 +155,14 @@ END
# Enter PIN
      -

      Create a self-signed root cert

      +

      Create a self-signed root cert

      openssl req -engine pkcs11 -keyform engine -key label_root -new -extensions ext_root -out root/certs/root.cert.pem -x509 -subj '/C=US/ST=California/O=XXXX/OU=XXXX Certificate Authority/CN=XXXX Root CA' -days 7300
       # Enter PIN
       chmod 444 root/certs/root.cert.pem
       
      -

      Verify root cert

      +

      Verify root cert

      openssl x509 -noout -text -in root/certs/root.cert.pem
       
      @@ -176,14 +176,14 @@ chmod 444 root/certs/root.cert.pem
CA:TRUE
-

      Import root cert onto HSM

      +

      Import root cert onto HSM

      openssl x509 -in root/certs/root.cert.pem -out root/certs/root.cert.der -outform der
       /usr/local/bin/pkcs11-tool --module /usr/local/lib/opensc-pkcs11.so --login --write-object root/certs/root.cert.der --type cert --label root
       # Enter PIN
       
      -

      Create an intermediate key

      +

      Create an intermediate key

      Insert your intermediate HSM

@@ -191,13 +191,13 @@ chmod 444 root/certs/root.cert.pem
# Enter PIN

-

      Create an intermediate certificate signing request (CSR)

      +

      Create an intermediate certificate signing request (CSR)

      openssl req -engine pkcs11 -keyform engine -new -key label_intermediate -out intermediate/csr/intermediate.csr.pem  -subj '/C=US/ST=California/O=XXXX/OU=XXXX Certificate Authority/CN=XXXX Intermediate'
       # Enter PIN
       
      -

      Sign intermediate cert with root key

      +

      Sign intermediate cert with root key

      Insert your root HSM

@@ -206,7 +206,7 @@ chmod 444 root/certs/root.cert.pem
chmod 444 intermediate/certs/intermediate.cert.pem

-

      Verify intermediate cert

      +

      Verify intermediate cert

      openssl x509 -noout -text -in intermediate/certs/intermediate.cert.pem
       openssl verify -CAfile root/certs/root.cert.pem intermediate/certs/intermediate.cert.pem
      @@ -222,7 +222,7 @@ openssl verify -CAfile root/certs/root.cert.pem intermediate/certs/intermediate.
       
OK
-

      Import root & intermediate certs onto HSM

      +

      Import root & intermediate certs onto HSM

      Insert your intermediate HSM

@@ -233,19 +233,19 @@ openssl verify -CAfile root/certs/root.cert.pem intermediate/certs/intermediate.
# Enter PIN
      -

      Create a chain certificate file

      +

      Create a chain certificate file

      cat intermediate/certs/intermediate.cert.pem root/certs/root.cert.pem > intermediate/certs/chain.cert.pem
       chmod 444 intermediate/certs/chain.cert.pem
       
      -

      CA setup done!

      +

      CA setup done!

      Take your root HSM, if you have a separate one, and lock it in a safe somewhere; you won’t need it for regular use.

      The following steps are examples of how to use your new CA.

      -

      Create a client key

      +

      Create a client key

      You can substitute “server” for “client” for a server cert.

@@ -254,19 +254,19 @@ chmod 444 intermediate/certs/chain.cert.pem
chmod 400 client/private/test1.key.pem

-

      Create a client certificate signing request (CSR)

      +

      Create a client certificate signing request (CSR)

      openssl req -new -key client/private/test1.key.pem -out client/csr/test1.csr.pem  -subj '/C=US/ST=California/O=XXXX/OU=XXXX Test/CN=XXXX Test 1'
       
      -

      Sign client cert with intermediate key

      +

      Sign client cert with intermediate key

      openssl ca -engine pkcs11 -keyform engine -extensions ext_client -notext -in client/csr/test1.csr.pem -out client/certs/test1.cert.pem
       # Enter PIN
       chmod 444 client/certs/test1.cert.pem
       
      -

      Verify client cert

      +

      Verify client cert

      openssl x509 -noout -text -in client/certs/test1.cert.pem
       openssl verify -CAfile intermediate/certs/chain.cert.pem client/certs/test1.cert.pem
      @@ -282,7 +282,7 @@ openssl verify -CAfile intermediate/certs/chain.cert.pem client/certs/test1.cert
       
OK
-

      Create a PKCS#12 bundle for the client

      +

      Create a PKCS#12 bundle for the client

      This is an easy(er) way to get all the necessary keys & certs to the client in one package.

@@ -290,14 +290,14 @@ openssl verify -CAfile intermediate/certs/chain.cert.pem client/certs/test1.cert
# Enter both the client key password, and a new password for the export; you'll need to give the latter to the client
      -

      Generate a certificate revocation list (CRL)

      +

      Generate a certificate revocation list (CRL)

      Initially empty. You can also do this for your root CA (with its HSM inserted).

      openssl ca -engine pkcs11 -keyform engine -gencrl -out intermediate/crl/intermediate.crl.pem
       
      -

      Verify certificate revocation list

      +

      Verify certificate revocation list

      openssl crl -in intermediate/crl/intermediate.crl.pem -noout -text
       
      @@ -309,7 +309,7 @@ openssl verify -CAfile intermediate/certs/chain.cert.pem client/certs/test1.cert
Signature algorithm (ecdsa-with-SHA256)
-

      Revoke a certificate

      +

      Revoke a certificate

      Only do this if you need to. Find the certificate:

diff --git a/2016-04-02-apt-caching-for-debootstrap.html b/2016-04-02-apt-caching-for-debootstrap.html
index 51ceb17..29c3d0e 100644
--- a/2016-04-02-apt-caching-for-debootstrap.html
+++ b/2016-04-02-apt-caching-for-debootstrap.html
@@ -5,18 +5,18 @@

      If you’re building system images, you’re going to do a lot of debootstrap, which is going to fetch a lot of packages. On a fast system, that’ll be the slowest part of the process. Here’s how to cache.

      -

Install squid-deb-proxy

+

Install squid-deb-proxy

      sudo apt-get install squid-deb-proxy
       
      -

      Tell programs to use the proxy

      +

      Tell programs to use the proxy

      export http_proxy=http://127.0.0.1:8000
       # Note that you'll need to re-export this before any use of debootstrap
       
      -

      Tell sudo to pass through http_proxy

      +

      Tell sudo to pass through http_proxy

      sudo visudo
       # Add the line after the env_reset line:
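The sudoers line itself is cut off by the diff; the conventional form for passing a proxy variable through (an assumption, not quoted from the post) is:

```
Defaults    env_keep += "http_proxy"
```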
diff --git a/2016-05-17-wifi-bridging-redux.html b/2016-05-17-wifi-bridging-redux.html
      index d0683e7..d1619e1 100644
      --- a/2016-05-17-wifi-bridging-redux.html
      +++ b/2016-05-17-wifi-bridging-redux.html
      @@ -15,7 +15,7 @@
       
Not assume that the WLAN MAC address is the only MAC at the other end of the link. This assumption is frequently used to reduce the effect of broadcast traffic in a WiFi environment by filtering. There may be settings like “Multicast optimization”, “Broadcast optimization”, or “DHCP optimization” that you need to turn off.
-

      Bridging

      +

      Bridging

      Linux supports bridging. There’s a bridge-utils package in Ubuntu with the tools you need:

      @@ -35,7 +35,7 @@ can't add wlan0 to bridge br0: Operation not supported

      Googling this error produces a wide range of well-meaning yet completely unhelpful results.

      -

      Enable 4 address mode

      +

      Enable 4 address mode

      To be able to add a WiFi interface to a bridge, you have to put it into 4-address mode first:

      @@ -53,7 +53,7 @@ sudo iw dev wlan0 set 4addr on

      You should now be able to fetch an IP on br0 via DHCP. Unless, of course, you need wpa_supplicant to work…

      -

      wpa_supplicant

      +

      wpa_supplicant

      wpa_supplicant needs to be bridge-aware to work with 4-address mode. Fortunately, it has a flag (-b) to set the bridge interface. Unfortunately, this flag is broken in 2.1, the version in Ubuntu Trusty. I verified that it works in wpa_supplicant 2.5 built from source; I haven’t verified 2.4 from Xenial.

      @@ -64,7 +64,7 @@ sudo iw dev wlan0 set 4addr on

      With that working, the interface should get to wpa_state=COMPLETED, and br0 should work normally. Remember that wlan0 will still be unusable directly.

      -

      Ordering

      +

      Ordering

      Bringing up these interfaces is tricky; the ordering is annoying.

      @@ -74,7 +74,7 @@ sudo iw dev wlan0 set 4addr on
wpa_supplicant must be running before you can get an IP address on br0
-

      Putting it together

      +

      Putting it together

      Because of the ordering issues, it’s easier to treat this all as one interface that has to come up together. Here’s an example interface stanza that does this:
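The stanza itself is cut off by the diff; a hypothetical reconstruction from the ordering rules above (interface names, hook placement, and the wpa_supplicant config path are all guesses) might look like:

```
# Hypothetical sketch -- the post's real stanza is elided by the diff.
auto br0
iface br0 inet dhcp
    pre-up iw dev wlan0 set 4addr on
    pre-up brctl addbr br0
    pre-up brctl addif br0 wlan0
    pre-up ip link set wlan0 up
    pre-up wpa_supplicant -B -i wlan0 -b br0 -c /etc/wpa_supplicant.conf
    post-down wpa_cli -i wlan0 terminate
    post-down brctl delbr br0
```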

diff --git a/include/bottom.html b/include/bottom.html
index f3b1830..38fec81 100644
--- a/include/bottom.html
+++ b/include/bottom.html
@@ -6,9 +6,7 @@
-
      -

      *

      -
      +
      🔥🐄
diff --git a/include/style.css b/include/style.css
index d44a242..9d50d18 100644
--- a/include/style.css
+++ b/include/style.css
@@ -34,42 +34,34 @@ img {
  max-width: 100%;
}

-header {
+header, footer {
  text-align: center;
-  margin-bottom: 20px;
-}
-
-footer {
-  text-align: center;
-}
-
-h1, h2, h3, h4 {
+  margin-top: 10px;
+  margin-bottom: 15px;
  font-size: 17px;
-}
-
-h1 {
+  font-weight: bold;
  color: red;
}

-h1 a {
+footer {
+  filter: grayscale(1.0);
+}
+
+header a {
  color: red;
  text-decoration: none;
}

-h2 {
+h1, h2 {
+  font-size: 17px;
+  margin-top: 25px;
+}
+
+h1 {
  font-weight: bold;
  text-transform: uppercase;
-  margin-top: 0.5em;
-  margin-bottom: 0.5em;
-}
-
-h3 {
-  font-weight: bold;
-  margin-top: 2em;
-}
-
-h4 {
-  font-weight: normal;
+  margin-top: 7px;
+  margin-bottom: 7px;
}

article {
@@ -126,7 +118,7 @@ code {
    font-size: 13px;
  }

-  h1, h2, h3, h4 {
+  h1, h2 {
    font-size: 14px;
  }

diff --git a/include/top.html b/include/top.html
index b809fa4..7241313 100644
--- a/include/top.html
+++ b/include/top.html
@@ -4,7 +4,8 @@
-
+
+
">