Upgrade Raspberry Pi in place

Upgrading A Raspberry Pi Distribution in Place

Scope: To upgrade an existing Raspberry Pi from Stretch (Raspbian 9) to Buster (Raspbian 10) without re-imaging the SD card or losing any data.

  • Update and Upgrade the Raspberry Pi
sudo apt update && sudo apt upgrade -y
  • After that is done, upgrade the firmware. Note the warning in the output below: rpi-update pulls testing firmware and is not meant for routine updates, so skip this step unless you specifically need it.
sudo rpi-update
  • Answer Y when prompted. If you are already up to date, move on to the next step.
 *** Raspberry Pi firmware updater by Hexxeh, enhanced by AndrewS and Dom
 *** Performing self-update
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 18774  100 18774    0     0  17105      0  0:00:01  0:00:01 --:--:-- 17113
 *** Relaunching after update
 *** Raspberry Pi firmware updater by Hexxeh, enhanced by AndrewS and Dom
 *** We're running for the first time
 *** Backing up files (this will take a few minutes)
 *** Backing up firmware
 *** Backing up modules 4.19.66-v7+
#############################################################
WARNING: This update bumps to rpi-5.15.y linux tree
See: https://forums.raspberrypi.com/viewtopic.php?t=322879

'rpi-update' should only be used if there is a specific
reason to do so - for example, a request by a Raspberry Pi
engineer or if you want to help the testing effort
and are comfortable with restoring if there are regressions.

DO NOT use 'rpi-update' as part of a regular update process.
##############################################################
Would you like to proceed? (y/N)
  • Open the sources file that the Raspberry Pi uses for updates in your preferred editor. I am using nano.
sudo nano /etc/apt/sources.list
  • You should see the following line in the file
deb http://raspbian.raspberrypi.org/raspbian/ stretch main contrib non-free rpi
  • Navigate to that line and change stretch to buster
deb http://raspbian.raspberrypi.org/raspbian/ buster main contrib non-free rpi
  • If you are using nano like me, hit Ctrl+X and save the file.
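If you would rather not open an editor, the same swap can be scripted with sed. This is just a sketch rehearsed on a scratch copy in /tmp; once the output looks right, run the same sed (with sudo) against /etc/apt/sources.list.

```shell
# Demo on a scratch copy; the real file is /etc/apt/sources.list
printf 'deb http://raspbian.raspberrypi.org/raspbian/ stretch main contrib non-free rpi\n' > /tmp/sources.list.demo

# Swap every occurrence of stretch for buster, in place
sed -i 's/stretch/buster/g' /tmp/sources.list.demo

cat /tmp/sources.list.demo
# deb http://raspbian.raspberrypi.org/raspbian/ buster main contrib non-free rpi
```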
  • Remove apt-listchanges so the upgrade doesn't stall on a huge changelog prompt. (Sometimes this matters and sometimes it does nothing; either way it's not a big deal.)
sudo apt-get remove apt-listchanges
  • Type Y to proceed
pi@raspberrypi:~ $ sudo apt-get remove apt-listchanges
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following packages were automatically installed and are no longer required:
  dh-python distro-info-data iso-codes libpython3-stdlib lsb-release
  python-apt-common python3 python3-apt python3-minimal python3.5
  python3.5-minimal
Use 'sudo apt autoremove' to remove them.
The following packages will be REMOVED:
  apt-listchanges
0 upgraded, 0 newly installed, 1 to remove and 0 not upgraded.
After this operation, 377 kB disk space will be freed.
Do you want to continue? [Y/n] 
  • Update and upgrade the system
sudo apt update && sudo apt upgrade
  • Remove packages that are no longer needed
sudo apt autoremove
  • Reboot
sudo reboot
  • Upgrade the distribution
sudo apt dist-upgrade
  • Type Y to continue the upgrade
406 upgraded, 110 newly installed, 3 to remove and 0 not upgraded.
Need to get 98.4 MB/258 MB of archives.
After this operation, 307 MB of additional disk space will be used.
Do you want to continue? [Y/n] 
  • Note: There may be a few prompts while the system is updating; answer them as best you can with the information you have. One prompt asked whether I wanted to restart services automatically without asking… yes, yes I do.
  • When the upgrade is finished, we will auto remove and clean up all the files that are not needed anymore and reboot.
sudo apt autoremove -y
sudo apt autoclean
sudo reboot
  • Everything should fire up after the reboot, and you can check your distro with
lsb_release -a
  • Output:
No LSB modules are available.
Distributor ID:	Raspbian
Description:	Raspbian GNU/Linux 10 (buster)
Release:	10
Codename:	buster

!!!Success!!!

Setup of UDM behind EdgeRouter

Setting up a Unifi UDM behind an EdgeRouter using static routes

Scope: Use the UDM on the network behind an EdgeRouter, keep the option to add other networks behind the same EdgeRouter later if necessary, and keep using the same MAC address.

Situation: The only way I could get anything to work automatically on the network line allotted to me by the IT department was an EdgeRouter using a specific MAC address that IT had validated on their side. I originally gave IT the wrong MAC address, and fixing it was going to be a 20-page request to the IT department, and I did not want to do that again. So I used an EdgeRouter that my friend gave me, changed its MAC address through the CLI, and surprisingly the EdgeRouter worked on the network immediately… unlike the USG (UniFi Security Gateway) Pro 4. I tried for days to get the USG to work on the network solo, and then behind the EdgeRouter, with no success. That brought me to the conclusion that some configuration I had done caused the issue, but I was not going to tear apart my current configuration to figure it out, so I purchased a UDM as an upgrade/fresh start on the new network. Unfortunately I could not get the UDM to work on the network or behind the EdgeRouter either, so after several attempts I dove into static routes, which I remembered setting up once before when I had my own router behind an AT&T router. Here is what I did to get it working behind my current ISP host (IT) at work.

  • Log in to the EdgeRouter with your credentials and name the ports accordingly. For my purposes, port 1 is the WAN line from IT, port 2 is the network that needs to get to the interwebs, and port 3 is the one I am using to control all the chaos.

BaseSetup

  • I had to put in the ISP information I was given for port 1 as that is my “ISP” and I have a static IP address from them along with DNS servers.

  • Configure port 2 using the [Actions]>[Config] buttons on the right side to manually configure the name, enable the port, and select Manually define IP address with 10.0.0.1/24, then save the configuration. This is our “bridge port” from one router to the other.

BaseSetup

  • For the purpose of this setup, use the 3rd port as the “control network” to configure all this jazz without losing connection. Configure the port as shown below and save.

BaseSetup

  • Next, hop over to the Routing tab, select the “+ Add Static Route” button in the top left corner, use the values listed below, and save.
    • Select “Gateway” from the drop-down list
    • Destination Network = 192.168.1.0/24
    • Next Hop = 10.0.0.2
    • Distance = 1

BaseSetup

  • I am using the 192.168.1.0/24 network behind my UDM, and 10.0.0.2 is the WAN IP address of my UDM that is manually configured in the Cloud Key. With the EdgeRouter work complete, log into the UDM next and head over to the [Settings]>[Networks] tab.
  • Here I renamed the WAN port to Zone B WAN for my own convenience. Edit the WAN using the [edit] button in that row.

BaseSetup

  • Set the WAN facing parameters as follows:
    • Connection Type = Static IP
    • IP Address = 10.0.0.2
    • Subnet Mask = 255.255.255.0
    • Router = 10.0.0.1

BaseSetup

  • Save your settings and move over to the “Routing and Firewall” tab. Here you will select “+ Create New Route”, fill in the following parameters, then save:
    • Name = whatever you feel like, mine is outbound
    • Destination Network = 0.0.0.0/1
    • Distance = 1
    • Static route type = Next Hop
    • Next Hop = 10.0.0.1

  • From what I understand, the destination network is supposed to be 0.0.0.0/0, but the controller kept giving me a payload error, so I guessed something close and it worked. From here I cycled the network connection and the internet worked. The default network behind my UDM is 192.168.1.0/24, and that's why I had to add that route to the EdgeRouter, so it would complete the path.

  • In my instance, the default LAN's DNS servers had to be set to the IP addresses my IT network sent me, or else the networks would not work.

  • I believe this static route has to be repeated for each subsequent network created, e.g. 192.168.2.0/24, 192.168.3.0/24, etc.
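The same route can also be added from the EdgeRouter CLI instead of the GUI. This is a sketch in EdgeOS configure mode, using the addresses from this setup (the 192.168.2.0/24 line shows a hypothetical second network):

```shell
configure
set protocols static route 192.168.1.0/24 next-hop 10.0.0.2 distance 1
# Hypothetical second network behind the UDM
set protocols static route 192.168.2.0/24 next-hop 10.0.0.2 distance 1
commit
save
exit
```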

Arch CAC Setup

Common Access Card (CAC) Setup on Arch Based Linux Systems

Scope: To enable a CAC reader on Arch-based Linux systems. This gives users access to CAC-enabled sites (i.e. government webmail, TEDS, etc.) without the use of a Windows machine.

  • Install opensc, ccid, and pcsclite (pcsclite provides the pcscd daemon)
sudo pacman -S opensc ccid pcsclite
  • Enable the pcscd service
sudo systemctl enable pcscd
  • Start the pcscd service
sudo systemctl start pcscd
  • Load the security device

    • Navigate to Firefox Settings -> Privacy and Security -> scroll down to Certificates -> select Security Devices
    • Click “Load” and load the module using /usr/lib/opensc-pkcs11.so or /usr/lib/pkcs11/opensc-pkcs11.so
  • At this point your browser will function, but it is advised to close your browser and reopen it.
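If the module will not load or no certificates show up, it can help to first confirm that pcscd actually sees the reader. Both tools below ship with opensc; run them with the card inserted:

```shell
# List readers that pcscd can see
opensc-tool --list-readers

# Show the PKCS#11 slots/tokens the Firefox module will expose
pkcs11-tool --module /usr/lib/opensc-pkcs11.so --list-slots
```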

  • You are all set. You should no longer see warnings about untrusted sites when viewing government webpages.

UFW Configurations


Uncomplicated FireWall Configurations and Commands

Scope: I had to learn a bit about firewalls when it came to port forwarding for my Plex server. Luckily for me, I had a beautiful GUI with my Asus router that made it super easy to manage via a website. As I started to get more into the popular distributions of Linux, I realized that most of them ship with either GUFW or UFW, neither turned on by default. (G)raphical (U)ncomplicated (F)ire(W)all, where UFW is GUFW without the graphical interface… kinda easy to remember that way. Anyway, the more I tried to do with apps and programs, the more I had to keep changing these settings in GUFW. The real beef came into play when I built an RPi and had to start enabling ports without an interface. Like my friend Jon said, “Use the command line as much as possible and Linux will be a much better experience.”

Getting started: install UFW if you don't have it. I'm using Manjaro, so pacman is my “apt”.

sudo pacman -S ufw

You can check the status, which should be inactive by default

sudo ufw status

You should see

[andrewdelorey@andrew-pc ~]$ sudo ufw status
Status: inactive

We want to start and enable the service… but if this is a remote machine, you want ssh to remain open. I set that rule first

sudo ufw allow ssh

That will give you

[andrewdelorey@andrew-pc ~]$ sudo ufw allow ssh
Rules updated
Rules updated (v6)

Now you can start and enable the service.

sudo systemctl start ufw && sudo systemctl enable ufw

On my system this didn't actually start the firewall; I had to type

sudo ufw enable

Then it started with the feedback

[andrewdelorey@andrew-pc ~]$ sudo ufw enable
Firewall is active and enabled on system startup

Type

sudo ufw status

and you should see

[andrewdelorey@andrew-pc ~]$ sudo ufw status
Status: active

To                         Action      From
--                         ------      ----
22                         ALLOW       Anywhere                  
22 (v6)                    ALLOW       Anywhere (v6)             

Now, from what I understand, you aren't supposed to allow more than what you expect to be in contact with your machine, and you should limit protocols as well. Where I am at, I will never use ipv6 or UDP for my ssh connection. If you are on remote, let's look at our listed firewall rules and determine which ones we do not need. You have to do this before disabling UFW, as it will not return the rule list once it is disabled

sudo ufw status numbered

Then let's look at our numbered list of rules

[andrewdelorey@andrew-pc ~]$ sudo ufw status numbered
Status: active

     To                         Action      From
     --                         ------      ----
[ 1] 22                         ALLOW IN    Anywhere                  
[ 2] 22 (v6)                    ALLOW IN    Anywhere (v6)             

As you can see, we have 2 rules that are numbered accordingly.

Note: Always delete the highest number first

I accidentally found out that when you delete a rule, UFW renumbers every rule that is higher than the rule you deleted. For example, if I delete rule 1 first, rule 2 becomes rule 1. Seems pretty simple until you have 15-20 rules, then it gets eight shades of crazy.
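Given that renumbering, one way to clear several rules safely is to walk the numbers from highest to lowest. A sketch (rule numbers 3, 2, 1 are hypothetical; --force skips the per-rule confirmation):

```shell
# Hypothetical cleanup: delete rules 3, 2, 1 in that order
# so the remaining numbers stay valid as we go
for n in 3 2 1; do
    sudo ufw --force delete "$n"
done
```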

Moving on, let's disable the firewall, delete the rules, and start fresh

sudo ufw disable

then

sudo ufw delete 2

You should see

[andrewdelorey@andrew-pc ~]$ sudo ufw delete 2
Deleting:
 allow 22
Proceed with operation (y|n)? 

Type

y

Then hit enter. You should see

Rules updated (v6)

Now we delete rule 1, which is for ipv4.

sudo ufw delete 1

And you should see

Deleting:
 allow 22
Proceed with operation (y|n)? 

Type

y

Then hit enter, and there should be no more rules left. We cannot check right now since the firewall is disabled, and if we enable it first, we might lose connection. For my personal use, I only want ssh with ipv4 and tcp allowed through my firewall. Here is what that looks like:

sudo ufw allow ssh/tcp

That allows only tcp access on port 22, but an ipv6 rule still gets added. Let's start UFW, then see which line number the ipv6 rule is

sudo ufw enable

Then

sudo ufw status numbered

And you should see this

[andrewdelorey@andrew-pc ~]$ sudo ufw status numbered
Status: active

     To                         Action      From
     --                         ------      ----
[ 1] 22/tcp                     ALLOW IN    Anywhere                  
[ 2] 22/tcp (v6)                ALLOW IN    Anywhere (v6)     

As you can see, the table now shows /tcp where it did not before. The ipv6 rule is also there, but we can delete it or deny it; that's up to you. Here I am going to delete it.

sudo ufw delete 2

Type y, then enter and let’s do another check

sudo ufw status numbered

output

[andrewdelorey@andrew-pc ~]$ sudo ufw status numbered
Status: active

     To                         Action      From
     --                         ------      ----
[ 1] 22/tcp                     ALLOW IN    Anywhere                  

Perfect! The machine is now limited to port 22, tcp, and ipv4 access. You can also limit access by IP address, but that will be added later.
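As a preview of limiting by IP, a rule like the one below would restrict ssh to a single trusted subnet; 192.168.1.0/24 here is a placeholder, substitute your own LAN:

```shell
# Allow ssh over tcp only from one subnet, instead of from Anywhere
sudo ufw allow from 192.168.1.0/24 to any port 22 proto tcp
```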

VirtualHere USB Server Backend Upgrade

Purpose


It occurred to me one night that I needed to upgrade the server side of the “VirtualHere” USB server running on a Raspberry Pi. There had been several client-side updates, and one day I thought… “I wonder if I can upgrade the backend.” So here we are.

Instructions


  1. Check the version of the service using the [Properties], [info] drop-down on the client and write down the version

  2. Navigate to https://virtualhere.com

  3. Go to the USB Servers at the top of the page and select Linux USB Server which should lead to this page https://virtualhere.com/usb_server_software

  4. Scroll to the bottom and download the Virtual USB Server for Linux (ARM). Be sure not to use the 64-bit version.

  5. Assuming that you are using a Linux machine or terminal, use the secure copy command to move the file, i.e.

scp vhusbdarm pi@pi-j15.shopnet.com:/home/pi/
  6. Log into the Raspberry Pi via ssh and make the file executable.
chmod +x vhusbdarm
  7. Move the file to the /usr/bin location
sudo mv ./vhusbdarm /usr/bin/
  8. Reload services
sudo systemctl daemon-reload
  9. Restart the service
sudo systemctl restart virtualhere
  10. Give the client a few minutes to requery the host and check whether the version changed from the number acquired in step one.

  11. A restart is not necessary, but it won't hurt if you want to
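For next time, the copy-and-install steps above can be sketched as one short sequence run from the client machine (pi-j15.shopnet.com comes from the scp example above; substitute your own Pi):

```shell
# Assumes vhusbdarm was just downloaded to the current directory
scp vhusbdarm pi@pi-j15.shopnet.com:/home/pi/
ssh pi@pi-j15.shopnet.com '
  chmod +x /home/pi/vhusbdarm &&
  sudo mv /home/pi/vhusbdarm /usr/bin/ &&
  sudo systemctl daemon-reload &&
  sudo systemctl restart virtualhere
'
```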

Upgrading Hugo site and Nginx to a secure configuration

Purpose

Use https with nginx and hugo instead of http.

Much like anything else, encryption on the interwebs is a growing requirement that helps some and irritates others. For me, my Hugo site would not show up correctly in Firefox, as it kept defaulting my webpage to https instead of http. Super irritating for troubleshooting the site as I build it and figure out the other configurations to make it work right. So here is my step-by-step for upgrading my nginx configuration.

  • First off, install certbot on Debian 10
sudo apt install certbot
  • Stop nginx and start the certbot procedure
sudo systemctl stop nginx
sudo certbot certonly
  • It will ask you to agree to the terms of service and about your email; that part is up to the user. I agreed to the terms but did not share my email.
  • Eventually you will see the following:
How would you like to authenticate with the ACME CA?
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
1: Spin up a temporary webserver (standalone)
2: Place files in webroot directory (webroot)
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Select the appropriate number [1-2] then [enter] (press 'c' to cancel): 
  • Select 1, then put in the domain name that you have chosen. I used andrew.deloco.us
Plugins selected: Authenticator standalone, Installer None
Please enter in your domain name(s) (comma and/or space separated)  (Enter 'c'
to cancel):  andrew.deloco.us
  • If successful you should see this:
Congratulations! Your certificate and chain have been saved at:
   /etc/letsencrypt/live/andrew.deloco.us/fullchain.pem
   Your key file has been saved at:
   /etc/letsencrypt/live/andrew.deloco.us/privkey.pem
   Your cert will expire on 2021-05-08. To obtain a new or tweaked
   version of this certificate in the future, simply run certbot
   again. To non-interactively renew *all* of your certificates, run
   "certbot renew"
 - If you like Certbot, please consider supporting our work by:

   Donating to ISRG / Let's Encrypt:   https://letsencrypt.org/donate
   Donating to EFF:                    https://eff.org/donate-le
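Note that the certificate expires after roughly 90 days. Since the standalone authenticator needs port 80 free, one way to script the renewal (a sketch using certbot's hook flags) is:

```shell
# Stop nginx just for the renewal, then bring it back up
sudo certbot renew --pre-hook "systemctl stop nginx" --post-hook "systemctl start nginx"
```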
  • Copy the .pem locations, as we will need them for the nginx configuration
  • Open your nginx config. Mine is named hugo
sudo nano -l /etc/nginx/sites-available/hugo
  • The non-secure config should look like this:
        #Listen for ipv4 on port 80
server {
        listen 80;

        #Listen for web address
        server_name andrew.deloco.us;
        #HTML file location
        root /var/www/AndrewResume/public/;
        #Default file to serve; Hugo generates index.html
        index index.html;
}
  • Change it to look like this using the .pem directories from above:
        #Listen for ipv4 on port 80
server {
        listen 80;

        #Listen for web address
        server_name andrew.deloco.us;

        #Redirect from port 80 to 443
        location / {
                return 301 https://$host$request_uri;
        }
}
        #Listen on port 443
server {
        listen 443 ssl http2;

        #Logging
        access_log /var/log/nginx/andrew.deloco.us.access.log;
        error_log /var/log/nginx/andrew.deloco.us.error.log;

        #Site certificate locations
        ssl_certificate /etc/letsencrypt/live/andrew.deloco.us/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/andrew.deloco.us/privkey.pem;
        ssl_trusted_certificate /etc/letsencrypt/live/andrew.deloco.us/chain.pem;

        #Web address that is now listening on 443
        server_name andrew.deloco.us;
        #HTML file location
        root /var/www/AndrewResume/public/;
        #Default file to serve; Hugo generates index.html
        index index.html;
}
  • The configuration listens for both secure and non-secure requests and redirects the non-secure to secure. Legit.
  • Restart Nginx
sudo systemctl start nginx
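If nginx refuses to start or the site misbehaves after an edit like this, nginx's built-in syntax check will point at the offending line:

```shell
# Validate the config without restarting anything
sudo nginx -t
```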
  • Now we have to change some configurations in the config.toml to match the secure site path or you will get errors on the page.
  • Change directory and open the config file.
cd /var/www/AndrewResume/
nano -l config.toml
  • As you can see, the BaseURL in the file states http://, which we need to change to https://
BaseURL = "http://andrew.deloco.us/"
languageCode = "en"
title = "Andrew DeLorey"
theme = "hugo-initio"
publishDir = "public"
  • Change to
BaseURL = "https://andrew.deloco.us/"
languageCode = "en"
title = "Andrew DeLorey"
theme = "hugo-initio"
publishDir = "public"
  • Rebuild your hugo files while you are in that directory
hugo

The site is now secure and works as intended

Creating a static website with Hugo and Nginx

Hugo static site with Nginx on Debian 10

Purpose


Alright, so I was asked to make a resume and keep it current, but the problem I was having is that I have 4 or 5 resumes stored in different formats and going back several years. So I decided to make a master resume that not only appeases the masses, but also keeps track of some of the cool things I have tried to do, the things I have learned, and examples to prove it… all while using markdown files that do not seem to age. Special thanks to Jon Polom for teaching me the basics of markdown. I contacted another guru of the interwebs, Andrew Dunn, about how to make a site using primarily markdown, and his response, verbatim: “Hugo”. So here we are.

Setup


  • First things first, I needed somewhere to host this site from
    • I originally wanted to host it on a Dell D30 that I am working with, but using ZFS as a file system kind of scared me off due to permissions and the elaborate stuff that goes with it
    • I'll be honest, my Manjaro PC was my initial build and setup. The setup was so fast, smooth, and simple that I really did not want to stray from it, but I also did not want to use my daily driver as a workhorse because I am selfish that way
    • Winner: a DigitalOcean droplet it was. I was kind of bummed that they didn't have an Arch or Manjaro image already available, so Debian 10 was the next best thing… probably for the best

Install Brew

  • With Debian up and running, we need to install Hugo, but it is not the same as on Manjaro by any means. Manjaro was as easy as sudo pacman -S hugo and done. On Debian I used brew, which I had never worked with before, so that was kind of strange to start with.
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
  • and you will see this:
==> This script will install:
/home/linuxbrew/.linuxbrew/bin/brew
/home/linuxbrew/.linuxbrew/share/doc/homebrew
/home/linuxbrew/.linuxbrew/share/man/man1/brew.1
/home/linuxbrew/.linuxbrew/share/zsh/site-functions/_brew
/home/linuxbrew/.linuxbrew/etc/bash_completion.d/brew
/home/linuxbrew/.linuxbrew/Homebrew
==> The following new directories will be created:
/home/linuxbrew/.linuxbrew/var
/home/linuxbrew/.linuxbrew/share/zsh
/home/linuxbrew/.linuxbrew/share/zsh/site-functions
/home/linuxbrew/.linuxbrew/var/homebrew
/home/linuxbrew/.linuxbrew/var/homebrew/linked
/home/linuxbrew/.linuxbrew/Cellar
/home/linuxbrew/.linuxbrew/Caskroom
/home/linuxbrew/.linuxbrew/Frameworks

Press RETURN to continue or any other key to abort
  • Press Enter, then type in your sudo password. It takes a bit to finish doing its thing, but the very end yields the following
==> Next steps:
- Add Homebrew to your PATH in /home/andrewdelorey/.profile:
    echo 'eval $(/home/linuxbrew/.linuxbrew/bin/brew shellenv)' >> /home/andrewdelorey/.profile
    eval $(/home/linuxbrew/.linuxbrew/bin/brew shellenv)
- Run `brew help` to get started
- Further documentation: 
    https://docs.brew.sh
- Install the Homebrew dependencies if you have sudo access:
    sudo apt-get install build-essential
    See https://docs.brew.sh/linux for more information
- We recommend that you install GCC:
    brew install gcc
  • To which we follow blindly and do as the text says:
echo 'eval $(/home/linuxbrew/.linuxbrew/bin/brew shellenv)' >> /home/andrewdelorey/.profile
sudo apt-get install build-essential
brew install gcc

Install Hugo

  • Let's install Hugo

    • brew install hugo
    • yup…that was it.
  • Now we need to create the site, but apparently location is everything. According to the online peeps, creating a site in your home directory is not a good idea because you run into permission problems. Not sure if that is true, but I had a lot of issues on my Debian install when I created the site in my home folder.

  • so change the directory you are in to

cd /var/www/
  • then create your site. Mine is called AndrewResume.
hugo new site AndrewResume
  • This will create a folder with a bunch of stuff in it that is needed to run the site. The site won’t run right yet because it requires a theme to organize the data in a structure.

    • Change your directory to the themes folder and go to https://themes.gohugo.io/ and select a theme that appeals to you.
    • Follow the instructions provided by the theme author to copy the information to your theme folder.
    • I chose the Hugo Initio theme and went to the git repository to copy the exampleSite folder. This was what I based my entire structure on, since most of the attributes I wanted were already there. I copied the configs and folders to their respective locations, expecting to see exactly what I saw in the demo
    • While in the AndrewResume folder, start the server. **Note**: you have to be in the correct directory or hugo will not work:
hugo server -D
  • Go to localhost:1313 in your browser
localhost:1313
  • The site should look the same as the demo. Since I originally did this on a Manjaro PC at my house, it was really easy to modify and see the changes on the fly. The DigitalOcean droplet was not so easy, since there was no way for me to see localhost on a remote machine in the cloud. This is where things got a little sticky, and I ran into a bunch of issues because of my ignorance of how things work.
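For what it's worth, hugo's dev server can be told to listen on all interfaces so a remote browser can preview it. This is a sketch: the IP is a placeholder for your droplet's public address, and port 1313 would also need to be open in the firewall:

```shell
# Placeholder address; use your droplet's public IP
hugo server -D --bind 0.0.0.0 --baseURL http://203.0.113.10:1313
```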

Configure Firewall

I use ufw as my firewall so I am going to go over the install and configuration just to be safe:

  • Install ufw
sudo apt install ufw

Since my setup is remote, I will do my configuration before starting the service.

  • Allow ssh:
sudo ufw allow ssh
  • Allow Nginx
sudo ufw allow 'Nginx Full'
  • Enable firewall
sudo ufw enable

Install Nginx

  • Install nginx
sudo apt install nginx
  • Copy the default nginx file while renaming it to your site. I named mine hugo
cd /etc/nginx/sites-available/
cp default hugo
nano hugo
  • The file should look like this:
server {
       listen 80;
       listen [::]:80;

       server_name mysite.com www.mysite.com;

       root /home/username/mysite/public/; #Absolute path to where your hugo site is
       index index.html; # Hugo generates HTML

       location / {
               try_files $uri $uri/ =404;
       }
}
  • Because I originally did not look up a tutorial on how to configure nginx, I first tried to use the proxy_pass parameter with localhost:1313, but some attributes did not work so well. I was having trouble with the source path, or baseURL, on the webpage stating localhost:1313 instead of the actual website andrew.deloco.us. The site would load, but none of the images came through. The nginx hugo config looked something like this:

Bad Nginx config

server {
       listen 80;
       listen [::]:80;

        server_name andrew.deloco.us;
        
        location / {
            
            proxy_pass http://localhost:1313;
            
       }
}
  • Looking at the source code of the web page in Google, you can see the issue it produced below on line 4

Bad source path example

1  <header id="header">
2    <div id="head" class="parallax" data-parallax-speed="2" style="background-image:url('http://localhost:1313/images/MainBackDrop2.jpg');">
3    <h1 id="logo" class="text-center">
4      <img class='img-circle' src="http://localhost:1313/images/AndrewCenterPhoto.jpg" alt=""> ##localhost:1313 should be andrew.deloco.us
5      <span class="title">Andrew DeLorey</span>
6      <span class="tagline">Machinist By Trade<br>
7        <a href="mailto:andrew.delorey@deloco.us">andrew.delorey@deloco.us</a>
8      </span>
9    </h1>
10 </div>
  • Luckily, a smart friend sent me their elaborate configuration, which only made things worse before they got better. Apparently proxy_pass was not the solution; the site needed a root path to the hugo public folder (which I will explain in a bit) and index index.html below the stated path. I accidentally put it above the root path and nginx did not like that.

Working config example:

server {
        listen 80;
        listen [::]:80;

        server_name andrew.deloco.us;
        root /var/www/AndrewResume/public;
        index index.html;
}
  • Remove the default file and enable the config by creating a symbolic link to the sites-available folder
sudo rm /etc/nginx/sites-enabled/default
sudo ln -s /etc/nginx/sites-available/hugo /etc/nginx/sites-enabled/hugo
  • Enable and start Nginx
sudo systemctl enable nginx
sudo systemctl start nginx
  • Now this got me in the ballpark, but with all the other issues I was working with, I still wasn't out of the woods. I was under the impression that if you typed hugo server -D in the AndrewResume folder, life was all peachy and things would work right… not for this theme. The configuration file had to be changed to match some of the parameters of the site, AND apparently this setup needs you to create a public folder and point hugo to it. So… we change it:

  • in the AndrewResume folder type

nano config.toml

The top of the configuration file looks like this

1 BaseURL = "localhost:1313/"
2 languageCode = "en"
3 title = "Andrew DeLorey"
4 theme = "hugo-initio"
5 publishDir = ""
  • Note: I added the line numbers (e.g. 1, 2, 3, 4) for ease of explaining what I did wrong; these numbers do not exist in the actual config file

  • I had to change the configuration to look like this, where line 1 was changed to "http://andrew.deloco.us/" and line 5 was changed to "public"

1 BaseURL = "http://andrew.deloco.us/"
2 languageCode = "en"
3 title = "Andrew DeLorey"
4 theme = "hugo-initio"
5 publishDir = "public"
  • Next up, I created the public folder, as defined above, in the AndrewResume folder:
mkdir public
  • then created the files by typing:
hugo
  • The software automatically created the webpages and dropped them in the public folder that nginx is pointed to.

Then BAM, the website andrew.deloco.us worked. I congratulated myself, yelled a bit about how great I was, then pulled back the excitement and started adding content.

Linux Commands Cheat Sheet

Secure Copy

Syntax Structure

scp [-346ABCpqrTv] [-c cipher] [-F ssh_config] [-i identity_file] [-J destination] [-l limit] [-o ssh_option] [-P port] [-S program] source ... target

Copy file from a remote host to local host SCP example:

scp username@from_host:file.txt /local/directory/

Copy file from local host to a remote host SCP example:

scp file.txt username@to_host:/remote/directory/

Copy directory from a remote host to local host SCP example:

scp -r username@from_host:/remote/directory/  /local/directory/

Copy directory from local host to a remote host SCP example:

scp -r /local/directory/ username@to_host:/remote/directory/

Copy file from remote host to remote host SCP example:

scp username@from_host:/remote/directory/file.txt username@to_host:/remote/directory/

Find

  • Running find as sudo yields different results, since it can also search directories your user cannot read

Syntax Structure

 find  [-H]  [-L] [-P] [-D debugopts] [-Olevel] [starting-point...] [expression]

Find a file named host.conf, starting in the root folder.

sudo find / -name host.conf

Find a file named host.conf, starting in the root folder, not case sensitive.

sudo find / -iname host.conf

Find a file named host.conf, starting in root, not case sensitive, using a wildcard at the end

sudo find / -iname "host*"
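A couple more variants in the same vein, demonstrated in a scratch directory so they are safe to try anywhere (the /tmp paths exist only for this demo):

```shell
# Set up a scratch directory with a couple of matching files
mkdir -p /tmp/find-demo/sub
touch /tmp/find-demo/host.conf /tmp/find-demo/sub/HOST.CONF /tmp/find-demo/notes.txt

# Regular files only, name ending in .conf, any case
find /tmp/find-demo -type f -iname '*.conf'

# Count the matches instead of listing them
find /tmp/find-demo -type f -iname '*.conf' | wc -l
```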

ThingSpeak Temperature Sampling


Raspberry Pi Temperature Sampling using ThingSpeak and an SHT31 Sensor

Scope: I work in a shop environment that gets pretty unbearable in the summertime. Hot machines, above-85-degree weather, and Michigan humidity equal dehydration and heat stroke. There is a Union contract here that dictates a temp:humidity ratio for when we should be sent home, but it has only hit that twice since 2008, and no one was really monitoring those variables. I talked to our safety department, and their hands were kind of tied due to funding; the only reprieve they could offer was putting a calibrated sensor on the shop floor and having someone come down and read it every day. That seemed like a great solution, except the waterjet area was more humid than the machine shop, sheetmetal was cooler than the rest of the shop, and welding was the hottest area in the summer, for obvious reasons. Additionally, the temperature would spike at different times of the day. So the one-sensor-check-per-day scenario, at the same time every day, did not work as well as intended when the temperature or humidity spiked after 1400 hours and the sample was already taken, or the sensor was in a cooler part of the shop. After a conversation with safety about these downfalls, we agreed that their process was about as good as it was going to get with the resources available. I had never considered that there were other cells with the same issues and limited personnel to check the readings.

Requirements: The sampling procedure has to be automated, it needs to sample as often as possible, there need to be sensors in more than one location, and the data must be remotely viewable.

Solution: We decided on a Raspberry Pi 3B with an SHT31 sensor, as we could use the Raspberry Pi for other purposes and the sensor was the most accurate for the out-of-pocket cost.

  • Below is the Python code used to get information from the sensor and send it to ThingSpeak.com. The objective was to take a sample every minute for 5 minutes, add them up, and divide by 5 for an average, then send the average to ThingSpeak. We did this for a few reasons:
  1. We are limited to the number of samples per hour we can send up on the free version.
  2. The bay door opens too often and would give extremely dynamic readings, causing the graph to show massive peaks and valleys. This made the more important data hard to visualize, as the graph scale became too small to read those numbers.
  3. I wanted to learn something a bit more complex.

So with some help from the interwebs and a buddy I work with, we came up with this script.

#!/usr/bin/python
"""
MJRoBot Lab Temp Humidity Light RPi Station

Temperature/Humidity/Light monitor using Raspberry Pi, DHT11, and photosensor 
Data is displayed at thingspeak.com
2016/03/03
MJRoBot.org

Based on project by Mahesh Venkitachalam at electronut.in and SolderingSunday at Instructables.com

"""

# Import the libraries we need to run
import time
from time import sleep
from Adafruit_SHT31 import SHT31
import urllib2


# Set up our API key and sample delay.
myAPI = "XXXXXXXXXXXX" # Put your ThingSpeak API key here, leave the quotes.
myDelay = 60 # seconds between samples; the average posts after 5 samples

def GetSensorData():
    sensor = SHT31(address=0x44)  # default I2C address for the SHT31
    degrees = (sensor.read_temperature() * 1.8) + 32  # Celsius to Fahrenheit
    humidity = sensor.read_humidity()

    return degrees, humidity

def dataAverage():
    MeanDataTemp = []
    MeanDataHumid = []
    counter=0
    AveTemp=0
    
    while counter <= 4:
        degrees, humidity = GetSensorData()
        #uncomment below for loop debugging in terminal
        #print degrees, humidity, counter
        MeanDataTemp.append(degrees)
        MeanDataHumid.append(humidity)
        sleep(int(myDelay))
        counter = counter + 1

    AveTemp = sum(MeanDataTemp) / 5
    AveHumid = sum(MeanDataHumid) / 5
    #Uncomment to see variables in terminal
    #print AveTemp, AveHumid
    return (round(AveTemp,3), round(AveHumid,2))

# main() function
def main():
    
    print 'starting...'
    
    baseURL = 'https://api.thingspeak.com/update?api_key=%s' % myAPI
    print baseURL
    
    while True:
        try:
            AveTemp, AveHumid = dataAverage()
            f = urllib2.urlopen(baseURL +
                                "&field1=%s&field2=%s" % (AveTemp, AveHumid)) # change these fields to accommodate your setup.
                              
            print f.read()
            
            print  '{0:0.3F} deg F'.format(AveTemp) + " " + '{0:0.2f} %'.format(AveHumid)
            f.close()
            
        except Exception as e:
            print e
            print 'Network failure, retrying.'
            # wait a minute; the while loop will try again
            time.sleep(60)

# call main
if __name__ == '__main__':
    main()
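For anyone porting this later: the sample-and-average logic above translates directly to Python 3. A minimal sketch, with the SHT31 read stubbed out (the stub values and function names below are illustrative, not part of the original script):

```python
from statistics import mean

def get_sensor_data():
    # Stub standing in for the SHT31 read; on the Pi this would call
    # sensor.read_temperature() and sensor.read_humidity() instead.
    celsius, humidity = 25.0, 40.0
    return (celsius * 1.8) + 32, humidity  # Celsius to Fahrenheit

def data_average(samples=5):
    # Collect `samples` readings, then return rounded averages
    temps, humids = [], []
    for _ in range(samples):
        degrees, humidity = get_sensor_data()
        temps.append(degrees)
        humids.append(humidity)
        # on the Pi, time.sleep(60) would go here between samples
    return round(mean(temps), 3), round(mean(humids), 2)
```

With the stub values above, `data_average()` returns `(77.0, 40.0)`.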

After we managed to get the Python code to run exactly how we needed it to, we had to figure out how to get it to run on boot without logging into the RPi via ssh and starting the code manually. After some research, the rc.local file seemed to be the legitimate way to start the code. I found an example online, copied it, then added

/home/pi/CollectAndSendToThinkSpeak.py &

after the code that was already there (the trailing & runs the script in the background, since it loops forever and would otherwise keep rc.local from finishing). Then I made the rc.local file executable

sudo chmod +x /etc/rc.local

and rebooted the RPi.

  • Here is the rc.local file
#!/bin/sh -e
#
# rc.local
#
# This script is executed at the end of each multiuser runlevel.
# Make sure that the script will "exit 0" on success or any other
# value on error.
#
# In order to enable or disable this script just change the execution
# bits.
#
# By default this script does nothing.

# Print the IP address
_IP=$(hostname -I) || true
if [ "$_IP" ]; then
  printf "My IP address is %s\n" "$_IP"
fi
# run the sampler in the background so rc.local can exit
/home/pi/CollectAndSendToThinkSpeak.py &
exit 0
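As an aside, on images where rc.local is disabled, a cron @reboot entry is a common alternative way to launch the same script at boot (the interpreter and log paths here are just an illustration; adjust to your setup):

```shell
# Added via `crontab -e` as the pi user; runs once at every boot
@reboot /usr/bin/python /home/pi/CollectAndSendToThinkSpeak.py >> /home/pi/sampler.log 2>&1
```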

We ended up running the Python code in two areas: one in the machine shop and the other down in waterjet. Unfortunately, at that time, the other supervisor thought we were trying to do some hacking and/or illegal stuff and did not want sampling in the welding and assembly areas.

Here is the website if you want to see it yourself: ThingSpeak.