When I make something for the web, I have plenty of “static assets”: CSS files, JS files, and images. In my HTML files, I reference their sources as you might expect:
<link rel="stylesheet" href="./styles.css" />
<script async src="./scripts.js"></script>
<img src="./cool-image.avif" />
But when a browser downloads these, it will note the source URLs and cache the static assets, typically for a time set by the server via headers like Expires: <some date> or Cache-Control: public, max-age=15552000 (6 months).
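If you're curious what your server is actually sending, you can peek at the response headers with curl (example.com is just a placeholder here):

λ curl -sI https://example.com/styles.css | grep -iE 'cache-control|expires'
# e.g.: cache-control: public, max-age=15552000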
This is exactly what we want the browser to do, but what happens if I change the
contents of the file?
Nothing! The last-cached content is served!
The browser caching is doing its job to help the user by not re-downloading the same content, but if we have updated static asset content, we need to give the browser a way to know that the content is different. Then it should go download and use that updated asset.
One option, if I’m using a CDN, is to manually purge the assets. There are also some additional headers, like ETag and Last-Modified, that can help hint to a browser whether it can keep its cache for an asset, but if you don’t want to mess with request headers and/or want to guarantee the browser gets the latest version of an asset, you can provide a different file name for each version of an asset. Since the URL to the resource is different, the browser should always go and try to download it.
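As a quick sketch of how those hint headers behave (the URL and ETag value are placeholders): the browser echoes back the ETag it saw, and the server answers 304 Not Modified when nothing changed, so no body is re-downloaded.

λ curl -sI https://example.com/styles.css | grep -i etag
# e.g.: etag: "33a64df551425fcc"
λ curl -sI -H 'If-None-Match: "33a64df551425fcc"' https://example.com/styles.css | head -n 1
# e.g.: HTTP/2 304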
Yes, frameworks like Ruby on Rails and Phoenix will automatically do this sort of thing for you, but we’re exploring here and trying to keep things simple! (That will remain to be seen… 😅)
Let’s talk about how we can DIY (do it yourself).
What we want to do is take a file like styles.css, get an MD5 fingerprint (checksum/digest/hash) based on the file’s contents, and output something like styles.78f7f2c2d416e59525938565dd6dd565.css. This way, if anything in our file changes, we’ll get a new hash and therefore a new file name.
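You can see this for yourself before we automate anything: any edit to the file, even an added comment, produces a brand new checksum.

λ md5sum styles.css
78f7f2c2d416e59525938565dd6dd565  styles.css
λ echo "/* any change at all */" >> styles.css
λ md5sum styles.css
# a completely different checksum prints here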
Given we have these files:
index.html
cool-image.avif
scripts.js
styles.css
and our index.html file contains this:
<link rel="stylesheet" href="./styles.css" />
<script async src="./scripts.js"></script>
<img alt="" src="./cool-image.avif" />
then we should create a dist/ directory with files that resemble these:
index.html
cool-image.dadb0e162005e9b241a13ca5f871e250.avif
scripts.9efef7ad3d06e7703c7563dbc1ed78a9.js
styles.78f7f2c2d416e59525938565dd6dd565.css
and our index.html file should have its assets’ paths updated to resemble these:
<link rel="stylesheet" href="./styles.78f7f2c2d416e59525938565dd6dd565.css" />
<script async src="./scripts.9efef7ad3d06e7703c7563dbc1ed78a9.js"></script>
<img src="./cool-image.dadb0e162005e9b241a13ca5f871e250.avif" />
If you haven’t used md5sum before, go ahead and run man md5sum in your terminal. There are some neat things you can use it for, like storing a list of file checksums in a file, then detecting which files changed, having your build system make decisions based on that, and avoiding costly project rebuilds by rebuilding only the files or directories that changed, along with their dependents. But we only need the top-level, most basic thing from md5sum: computing an MD5 message digest.
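For instance, that change-detection trick is roughly this (a sketch; we won’t need it for our build): store the checksums once, then ask md5sum to verify them later.

λ md5sum *.css *.js > checksums.md5   # store a list of checksums
λ md5sum --check checksums.md5        # later: verify them
styles.css: OK
scripts.js: OK
# after editing styles.css, only the changed file is flagged:
λ md5sum --check --quiet checksums.md5
styles.css: FAILED
md5sum: WARNING: 1 computed checksum did NOT match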
Let’s say this is what our project folder looks like:
λ tree -a -L 1
.
├── .git
├── .gitignore
├── cool-image.avif
├── dist
├── index.html
├── scripts.js
└── styles.css
If I want to get an MD5 content hash for styles.css, I pass the filename to md5sum:
λ md5sum styles.css
78f7f2c2d416e59525938565dd6dd565  styles.css
Sweet! If we want to query by a bunch of different file extensions, md5sum can handle that:
λ md5sum *.{avif,css,js}
dadb0e162005e9b241a13ca5f871e250  cool-image.avif
78f7f2c2d416e59525938565dd6dd565  styles.css
9efef7ad3d06e7703c7563dbc1ed78a9  scripts.js
But if we want md5sum to ignore certain directories, find a bunch of different file types, and maybe do so a bit more efficiently, we can lean on the find tool. Run man find if you’re unfamiliar with it or can’t remember its syntax! Let’s run it with some options and then break down what we did:
λ find . \
    -type f \
    ! -path "./.git/*" \
    ! -path "./dist/*" \
    \( -iname "*.css" -o \
       -iname "*.js" -o \
       -iname "*.avif" -o \
       -iname "*.bmp" -o \
       -iname "*.gif" -o \
       -iname "*.heif" -o \
       -iname "*.jpeg" -o \
       -iname "*.jpg" -o \
       -iname "*.png" -o \
       -iname "*.svg" -o \
       -iname "*.webp" \
    \) \
    -exec md5sum '{}' +
78f7f2c2d416e59525938565dd6dd565  ./styles.css
dadb0e162005e9b241a13ca5f871e250  ./cool-image.avif
9efef7ad3d06e7703c7563dbc1ed78a9  ./scripts.js
Above, we told the find command to find all files in this directory, excluding the .git/ and dist/ directories, where the file name ends in one of a handful of extensions of likely static assets, and then we told it to execute md5sum on each one. At the bottom, we see the results!
Next, we want to take that MD5 hash on the left and output a new file whose name has the hash just before the extension. For that, we’re going to want to start putting this into a build script.
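The renaming itself only needs bash parameter expansion: ${file%.*} strips the extension, and ${file##*.} keeps only the extension. Here’s the idea in isolation before we script it:

λ file="./styles.css"
λ sum="78f7f2c2d416e59525938565dd6dd565"
λ echo "${file%.*}.${sum}.${file##*.}"
./styles.78f7f2c2d416e59525938565dd6dd565.css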
In your terminal, run the following commands to create a build file with some scaffolding, then make it executable (don’t copy the λ):

λ cat <<'EOF' > ./build
#!/usr/bin/env bash

set -o errexit
set -o errtrace
set -o nounset
set -o pipefail

function main {
  : # we'll fill this in shortly
}

main
EOF
λ chmod +x ./build
Once you’ve done that, open the file, and let’s add our find function in there:
# ...

BUILD_DIR="./dist"

function get_asset_md5sums {
  find . \
    -type f \
    ! -path "./.git/*" \
    ! -path "${BUILD_DIR}/*" \
    \( -iname "*.css" -o \
       -iname "*.js" -o \
       -iname "*.avif" -o \
       -iname "*.bmp" -o \
       -iname "*.gif" -o \
       -iname "*.heif" -o \
       -iname "*.jpeg" -o \
       -iname "*.jpg" -o \
       -iname "*.png" -o \
       -iname "*.svg" -o \
       -iname "*.webp" \
    \) \
    -exec md5sum '{}' +
}

function main {
  get_asset_md5sums
}
If you then run that file via ./build, you’ll get back the same results as before.

Update your main function with the following. We’ll use code comments to explain most of this part:
function main {
  # Recreate build dir
  rm -rf "${BUILD_DIR}" && mkdir -p "${BUILD_DIR}"

  # Create a bash array for holding "file=file_with_sum"
  # pairs for use later. Yes, I know bash 4 has associative arrays.
  # E.g.: "styles.css=styles.78f7f2c2d416e59525938565dd6dd565.css"
  assets_array=()

  # Get all asset MD5 checksums, put them into an assets
  # array for later use, and write each file to a new
  # file with the checksum in the name.
  while read -r sum file; do
    file_name="${file%.*}"   # Extract the file's name
    file_ext="${file##*.}"   # Extract the file's extension
    file_with_sum="${file_name}.${sum}.${file_ext}" # Hashed file name

    # Append to the assets array
    assets_array+=( "${file}=${file_with_sum}" )

    # Write the file's contents to the build directory
    # at the new, hashed file name.
    cat "${file}" > "${BUILD_DIR}/${file_with_sum}"
  done < <(get_asset_md5sums)
}
If you’re wondering about the <(get_asset_md5sums) part, it uses process substitution to let us have access to the assets_array variable, which we wouldn’t have access to if we piped get_asset_md5sums to while read ..., for the while loop would be run in a subshell environment.

Instead, with process substitution, the result of that function is stored in a named pipe/special temporary file (in /dev/fd/ on my system), the file name is passed, and then its contents are read and attached to the standard input by the < input redirection. To sum this aside up: if we did get_asset_md5sums | while read ..., we’d get an assets_array[@]: unbound variable error, so we’re using process substitution to get around that.
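If you’d like to see the difference for yourself, here’s a tiny, self-contained demo of the pipe-subshell problem versus process substitution:

#!/usr/bin/env bash
set -o nounset

# Piping runs the while loop in a subshell, so the array
# only ever exists there and is gone when the pipeline ends:
printf 'a\nb\n' | while read -r line; do items+=( "${line}" ); done
# echo "${items[@]}"   # would fail: items[@]: unbound variable

# Process substitution keeps the loop in the current shell,
# so the array survives:
while read -r line; do items+=( "${line}" ); done < <(printf 'a\nb\n')
echo "${items[@]}"      # prints: a b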
If you run ./build again, you won’t see any terminal output, but you will see a shiny new ./dist folder with your files in it!
The next part is a little more involved, for we need to create new HTML files that have updated values for asset source locations.
Before we can copy, update, and output our HTML files (of which we only have one in this example), we first need a way to find them! Add this below your get_asset_md5sums function:
function get_html_files {
  find . \
    -type f \
    ! -path "./.git/*" \
    ! -path "${BUILD_DIR}/*" \
    -iname "*.html"
}
Next, at the bottom of your main function, add this code, and we’ll use comments to try to explain each piece in context:
# For each HTML file...
while read -r file; do
  # For each line in the current HTML file...
  while IFS='' read -r line; do
    line_updated="${line}"

    # For each "file=file_with_sum" pairing...
    for val in "${assets_array[@]}"; do
      file_name_original=$(echo "${val}" | cut -d "=" -f 1)
      file_name_summed=$(echo "${val}" | cut -d "=" -f 2)

      # If the current line has the original file name...
      if [[ "${line_updated}" =~ ${file_name_original} ]]; then
        # ...then replace that file name with the hashed one.
        # (No `break` here: a line could reference more than one
        # asset, so we keep checking the rest of the pairings.)
        line_updated=$(echo "${line_updated}" | sed -E 's@'"${file_name_original}"'@'"${file_name_summed}"'@g')
      fi
    done

    # Print the line
    echo "${line_updated}"

  # Pass the file in, then once done, redirect the
  # printed file lines to a new file in our build dir
  done < "${file}" > "${BUILD_DIR}/${file}"

# Pass the HTML files in via process substitution
done < <(get_html_files)
Once this code runs, your HTML files should be copied over to your dist/ directory, but the lines referencing your static assets should all be updated!
This runs fast enough for my purposes, but if you have any performance tips or explanation corrections, please email me.
Once you’ve got this building, you could build it locally and push the dist/ folder up to your source control, but I want ./build to run automatically, and since I primarily use GitHub, I just want to deploy the dist/ directory to GitHub Pages.
To make this a reality, we can use GitHub Actions to deploy to GitHub Pages.
Create a .github/workflows/main.yml (the YAML file name can be whatever you like) and add the following:
name: CI

on:
  pull_request:
  push:

jobs:
  build:
    runs-on: ubuntu-latest
    permissions:
      contents: write
    steps:
      - name: Checkout repo under GH workspace
        uses: actions/checkout@v4

      - name: Run build script
        run: ./build

      - name: Deploy to gh-pages
        uses: peaceiris/actions-gh-pages@v3
        if: ${{ github.ref == 'refs/heads/main' }}
        with:
          github_token: ${{ secrets.GITHUB_TOKEN }}
          publish_dir: ./dist
After you’ve committed and merged everything into your main branch, go to your project’s “Settings” page, then click on “Pages” on the left, and set your “Branch” to point to gh-pages and / (root). The page should tell you that your site is live and give you a link and button to visit the site.
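If you want a quick sanity check from the terminal once the site is live, you can fetch the page and confirm it references the hashed filenames (swap in your own user and repo):

λ curl -s https://youruser.github.io/yourrepo/ | grep -oE 'styles\.[0-9a-f]{32}\.css'
# e.g.: styles.78f7f2c2d416e59525938565dd6dd565.css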
Here is a silly project where everything in this blog post was implemented: https://github.com/rpearce/gom-jabbar-bingo.
So… did we over-engineer our website? Maybe? But we’re also avoiding stale-cache issues by automating a guaranteed way of cache-busting our static assets, so that’s something!
If you’d like to see more bash content or something else entirely, send me an email!
Thanks for reading!
— Robert
This post is sponsored by Flavio Corpa (kutyel), and if you’re curious about Elm, Haskell, or functional JS, check out his work! If you’d like to sponsor me writing about a topic, check out my GitHub Sponsors page.
Static websites are typically .html-suffixed files that use only HTML, CSS, and a little JS, and they have a number of advantages: they’re usually cheap to maintain, easy to deploy, easy to cache via CDNs, and they have fewer security risks than dynamic web applications since they don’t have backend application servers. Static sites fit perfectly in the old web style of blogs, information sites, small business sites, personal photo galleries, etc. However, when it comes to setting and storing readers’ preferences (a common expectation these days), static sites are at a disadvantage, for they have no backend app server with which to communicate to figure this out and render the page just right for a viewer.
For me (and the purposes of this post), static sites do not refer to single-page apps (SPAs) nor server-side rendered web apps.
Apart from this website, which demonstrates most everything we’ll cover, I made a smaller example site here. Go ahead and open it up, make some selections, and refresh the page a few times, if you like. We will reference this example throughout the rest of this post, so keep the page open. The only bummer is that we’ll not be covering the font changing in this post, but its code is available in the example and follows the same patterns we’ll do here.
Let’s sketch out some markup we’ll need to accomplish our goal:
<!DOCTYPE html>
<html lang="en">
  <head>
    <!-- The usual title, meta, and link elements go here. -->
  </head>
  <body>
    <script>
      /*
       * This is where we'll put some blocking JS
       * that does a couple of things before the
       * page is painted. Trust me!
       */
    </script>
    <main>
      <h1>Static site theming example</h1>
      <!--
        This is where our <select> element
        for changing the theme will go.
      -->
    </main>
    <script async>
      /*
       * This is where we'll add some event listeners
       * to handle changes to our theming options. I
       * did this part using the body of a script element,
       * but it could reference a script file using the
       * `src` attribute, instead.
       */
    </script>
  </body>
</html>
After our <h1> element, let’s add a form select input that the reader can use to control their theme.
<form>
  <label for="select-theme">Theme</label>
  <select data-select-theme="" id="select-theme">
    <option value="moon">That's No Moon</option>
    <option value="forest">Forest</option>
    <option value="ocean">Ocean</option>
  </select>
</form>
Let’s then add some JS to the <script async> element at the bottom to listen for changes to our <select>:
(() => {
  const themeEl = document.querySelector('[data-select-theme]');

  if (themeEl) {
    themeEl.addEventListener('change', e => {
      console.log(e.target.value)
      // We need to do something here!
    });
  }
})();
If we were using a backend server, we’d submit the readers’ selection to the backend to store in a database or cookie, but since we don’t have a backend, we need to use localStorage.
Where we have that console.log and code comment above, it would be nice if we could call a function or method that would set the theme for us and do all that work. Let’s do that!
themeEl.addEventListener('change', e => {
  window.site.setTheme(e.target.value);
});
That feels better, but we don’t have a site object and therefore no setTheme method on it. Since we’ll probably want site to be available while the page is loading, we’re going to put this initialization code in the first <script> element:
(() => {
  window.site = {
    setTheme: (name) => {
      // Our theme-setting code will go here
    },
  };
})();
Before we add our theme-setting code, we need to talk about what we want it to do:

* set a data- attribute on the body? Why not?
* store the preference in localStorage
window.site = {
  setTheme: (name) => {
    document.body.setAttribute('data-theme', name);
    localStorage.setItem('prefTheme', name);
  },
};
This means that in our HTML file above, we should go back and add a default data-theme on <body> for when the page loads the first time:
<body data-theme="moon">
Then, when a reader changes the theme, it’ll update that value to whatever was selected in the <select>.
Feel free to poke around the example’s CSS to see this in full, but here’s the gist.
For our CSS, all we need to do is have our styles use CSS variables for the things that can change, and then those CSS variables are defined like this:
body, /* Because moon is the default here...this is just in case */
body[data-theme="moon"] {
  --alpha-link-visited: 0.85;
  --color-bg-body: 21, 21, 21;
  --color-bg-select: 21, 21, 21;
  --color-border-select: 206, 206, 206;
  --color-link: 246, 241, 213;
  --color-sponsor-hearts: 206, 206, 206;
  --color-text: 206, 206, 206;
  --icon-select: var(--icon-select-moon);
  --link-underline-offset: 0.2rem;
  --link-underline-thickness: max(0.1rem, 1px);
  --link-weight: bold;
}

body[data-theme="forest"] {
  --alpha-link-visited: 0.9;
  --color-bg-body: 57, 76, 66;
  --color-bg-select: 57, 76, 66;
  --color-border-select: 255, 255, 255;
  --color-link: 238, 213, 174;
  --color-sponsor-hearts: 153, 117, 90;
  --color-text: 255, 255, 255;
  --icon-select: var(--icon-select-forest);
  --link-underline-offset: 0.2rem;
  --link-underline-thickness: max(0.1rem, 1px);
}

body[data-theme="ocean"] {
  --alpha-link-visited: 0.85;
  --color-bg-body: 82, 179, 201;
  --color-bg-select: 123, 203, 222;
  --color-border-select: 0, 43, 77;
  --color-link: 0, 43, 77;
  --color-sponsor-hearts: 0, 43, 77;
  --color-text: 0, 43, 77;
  --icon-select: var(--icon-select-ocean);
  --link-underline-offset: 0.3rem;
  --link-underline-thickness: 0.2rem;
}
With this approach, every time the data-theme on body changes to a known value (moon, forest, or ocean), the CSS variables for that theme get used, and boom! You have theming! Great!
But what happens if they refresh the page…?
Oh no! The <select> isn’t populated with what the reader selected, and their theme isn’t the one they chose, either!
No worries! Let’s get access to the prefTheme as the page is loading:
window.site = {
  prefTheme: getPrefTheme(),
  // ...
}

function getPrefTheme() {
  const localPrefTheme = localStorage.getItem('prefTheme');

  /*
   * Make sure only our current themes are the ones that
   * can be set.
   *
   * Note: I should source the theme names elsewhere for a single
   * source of truth, but I'll figure that out another time. The
   * `<select>` isn't available at this point in the rendering
   * for us to look at the `<option>`s...
   */
  if (['moon', 'forest', 'ocean'].includes(localPrefTheme)) {
    return localPrefTheme;
  } else {
    return matchMedia('(prefers-color-scheme: dark)').matches
      ? 'moon'
      : 'ocean';
  }
}

window.site.setTheme(window.site.prefTheme);
window.site.setTheme(window.site.prefTheme);
When the page is loading, we try to get the prefTheme from localStorage, and if the value doesn’t match any of our current themes, then we use matchMedia to figure out if we should serve up a dark or light theme to the user.

Once we have a theme value stored, we go ahead and call window.site.setTheme(window.site.prefTheme) just in case, to make sure we have something stored for next time. We could do this in getPrefTheme() in the matchMedia logic branch, but doing so feels a little dirty to me.
Lastly for themes, we need to use window.site.prefTheme to select the correct dropdown option, and we can do that by sliding the work in where we have our <select>’s addEventListener:
if (themeEl) {
  themeEl
    .querySelector(`[value="${window.site.prefTheme}"]`)
    .selected = 'selected';

  themeEl.addEventListener('change', e => {
    window.site.setTheme(e.target.value);
  });
}
If you got lost along the way, no worries! Here’s a recap of all the code we did, and if you want to see it in action, be sure to check out the code used in the example.
<body data-theme="moon">
  <script>
    (() => {
      window.site = {
        prefTheme: getPrefTheme(),
        setTheme: (name) => {
          document.body.setAttribute('data-theme', name);
          localStorage.setItem('prefTheme', name);
        },
      };

      function getPrefTheme() {
        const localPrefTheme = localStorage.getItem('prefTheme');

        if (['moon', 'forest', 'ocean'].includes(localPrefTheme)) {
          return localPrefTheme;
        } else {
          return matchMedia('(prefers-color-scheme: dark)').matches
            ? 'moon'
            : 'ocean';
        }
      }

      window.site.setTheme(window.site.prefTheme);
    })();
  </script>
  <main>
    <h1>Static site theming example</h1>
    <form>
      <label for="select-theme">Theme</label>
      <select data-select-theme="" id="select-theme">
        <option value="moon">That's No Moon</option>
        <option value="forest">Forest</option>
        <option value="ocean">Ocean</option>
      </select>
    </form>
  </main>
  <script async>
    (() => {
      const themeEl = document.querySelector('[data-select-theme]');

      if (themeEl) {
        themeEl
          .querySelector(`[value="${window.site.prefTheme}"]`)
          .selected = 'selected';

        themeEl.addEventListener('change', e => {
          window.site.setTheme(e.target.value);
        });
      }
    })();
  </script>
</body>
Thanks for reading!
— Robert
This quote helped me wrap my head around ways we might inadvertently dehumanize one another by assigning those around us rigid roles in the context of our daily lives and businesses. Like removing extra dough around a cookie cutter baking shape, we may discard from others that wealth of life and creativity that makes us human, leaving behind only the shapes we expect to see, and this limits and devalues ourselves and everyone around us.
Machines are everywhere and, depending on where you live, are involved in nearly everything we do. We give them commands, and they do something. When a part is broken, we replace that part. When they no longer suit our needs or are broken enough, we discard them. If we are exposed to this thinking every day for most of our lives, how does that affect how we treat and interact with people around us? Through force of habit, do we unintentionally treat others similarly to how we treat machines? Have we always done this, but our current technology makes this more ingrained since it is all-engrossing and demands our constant attention?
The next time you find yourself about to eliminate someone’s “role” at work and lay them off, ignore someone bagging your groceries, grow impatient with your music teacher’s stories, or disregard someone’s ideas because of their station, try to stop and remember that the person before you is a vibrant, autonomous, wonderful being that brings their entire lifetime of valuable experiences before you and exists far outside the box you and/or society put them in.
The GHCup tool is the official installer for core Haskell tools: cabal, stack, haskell-language-server, and ghc.
I usually use Haskell through Nix (I’m liking devenv.sh, too), and I’ve also used it through Docker, but I was frustrated with build times and wanted to try the official Haskell way.
Unfortunately, I had a rough time trying to use GHCup on a macOS M1 (Ventura 13.2.1), so I documented trying to build a small Haskell project of mine, slugger, with it.
I use Homebrew for installing all sorts of CLI tools and apps for macOS (here’s my personal Brewfile).
While I will use it for something else later in this guide, I could not get ghcup to work properly when installed via Homebrew, and trying to upgrade GHCup through its interface conflicted with the Homebrew install. Instead, I will use the installer found on the GHCup page.
The example library I tried building was my URI slug library, slugger.
I like to keep my $HOME directory clean by having tools adhere to the XDG spec.

I read that if I wanted GHCup to use XDG, I needed to export this variable in the shell where the installer was going to run:

export GHCUP_USE_XDG_DIRS="true"

Here are my XDG environment variables. Since I always want this to be true, I include that in my .zshenv dotfile, just in case.
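In case it saves you a lookup, the XDG variables in question are just exports like these (the spec’s conventional locations; adjust to taste):

export XDG_CACHE_HOME="${HOME}/.cache"
export XDG_CONFIG_HOME="${HOME}/.config"
export XDG_DATA_HOME="${HOME}/.local/share"
export GHCUP_USE_XDG_DIRS="true" # so GHCup follows suit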
Next, I installed GHCup:
curl --proto '=https' --tlsv1.2 -sSf https://get-ghcup.haskell.org | sh
This is an interactive installer, so there was a bit of output and questions.
Tip: if you run this installer, make sure you read the messages.
The script asked if it could append something to the end of my .zshrc file. I prefer to own my environment setup, so I let it do its thing, inspected the file to make sure it looked good, and then changed the sourcing code into a style I prefer:
ghcup_script_path="${XDG_DATA_HOME}/ghcup/env"
[[ -f "${ghcup_script_path}" ]] && source "${ghcup_script_path}"
This adds some Haskell bin-related directories to $PATH if they aren’t already there.
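I won’t reproduce the generated file verbatim, but the PATH-guarding idiom a script like that relies on looks something like this (a sketch, not GHCup’s literal script):

# Prepend a directory to $PATH only if it isn't already present
case ":${PATH}:" in
  *":${XDG_DATA_HOME}/ghcup/bin:"*) ;; # already there; do nothing
  *) export PATH="${XDG_DATA_HOME}/ghcup/bin:${PATH}" ;;
esac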
Once this was all done, I opened a new shell window and ran ghcup tui.
TUI is an acronym for “terminal user interface”.
I used the interface to install the recommended tool versions, and this was really easy! Well done, GHCup crew.
Then I went to see if I could build slugger.
When I went to the slugger project directory, I ran cabal v2-build, and some LLVM errors printed to the screen.
Notably:
Warning: Couldn’t figure out LLVM version! Make sure you have installed LLVM between [9 and 13]
Remember how I said to make sure you read the installer messages? Yeah. I didn’t.
On Darwin M1 you might also need a working llvm installed (e.g. via brew) and have the toolchain exposed in the PATH.
Update: User bgamari on lobste.rs had a valuable insight into why installing LLVM is recommended by GHCup.
As suggested by the warnings above, I added brew "llvm@9" to my Brewfile, installed it, and tried to cabal v2-build the slugger project.
That didn’t work (same sort of issue).
I tried llvm@10, llvm@11, and llvm@12.

None of those worked, either! Would llvm@13 work? Maybe, maybe, maybe…
Update: This section may not be necessary. I went back, disabled this option, and I’m still able to build the library. I don’t recall this being my experience the first time around, though.
It seems none of these will work if ghc doesn’t know to use LLVM.

I keep a cabal config file in my dotfiles, and it had a section, program-default-options, that contained a ghc-options key for passing flags to ghc.

Here’s how I told GHC about LLVM:

program-default-options
  ghc-options: -fllvm

There’s more information about that in the Haskell GHC Backends doc.
Did that make a difference? Yep!
Aha! A different error.
cabal-3.6.2.0 Missing dependencies on foreign libraries:
Missing (or bad) C libraries: icuuc, icui18n, icudata
This one stemmed from trying to build a dependency, text-icu, and it seemed I was missing some libraries it expected to find on the OS.
I saw some references in GitHub issues to the icu4c tool, but I was luckily able to find this archived “Missing dependency on a foreign library” guide that simply told me what to do:
brew install icu4c
If you’re using stack, add this to ~/.stack/config.yaml:
extra-include-dirs:
- /usr/local/opt/icu4c/include
extra-lib-dirs:
- /usr/local/opt/icu4c/lib
Unfortunately, none of this worked out of the box for me, for two reasons:

* I wasn’t using stack
* Homebrew installs to /opt/homebrew/ on Apple Silicon, not /usr/local/
But those config options looked exactly the same as the recommendation from the build warning above, and that gave me some things to try:
If the libraries are already installed but in a non-standard location then you can use the flags --extra-include-dirs= and --extra-lib-dirs= to specify where they are.
It turns out that my cabal.conf file already had extra-include-dirs and extra-lib-dirs in it, so I didn’t need to pass paths every time I tried to build with cabal.
I don’t regularly edit cabal config files, so I took the stack YAML config above and tried it:
extra-include-dirs:
- /opt/homebrew/opt/icu4c/include
extra-lib-dirs:
- /opt/homebrew/opt/icu4c/lib
Nope, that didn’t work. I tried indenting the - to see if the config file liked that.
Nope.
While this config file might, at a glance, resemble YAML, it isn’t. It seems to resemble (or even be) a .cabal file (email me if you know, please!). Here was a correct way to write them:
extra-include-dirs:
  /opt/homebrew/opt/icu4c/include
extra-lib-dirs:
  /opt/homebrew/opt/icu4c/lib
With high hopes, I ran cabal v2-build again, and it worked!

I was successfully able to build my little library and test it out with cabal.
There are a number of places here where, if I’d have paid closer attention to (admittedly helpful) walls of text, I’d have been led to solutions faster. That is unquestionably my fault!
That said, the errors don’t cover everything you have to do (like the -fllvm GHC flag), and this overall experience on macOS was rough for me.
I am grateful for all the effort put into GHCup, and I know it takes time and money to make things simple.
For now, even though Nix’s story isn’t one of simplicity, either, I’m going to mostly stick with building Haskell projects that way. However, I’ll keep my options open and periodically try things the GHCup way, as well.
Thanks for reading!
— Robert
We will be working with the hakyll-nix-template, so go ahead and pull that up in a new browser tab. Its README also contains info on all the features that are provided.
If you don’t have nix, follow the nix installation instructions.
Once you have nix installed, follow the nix flakes setup instructions, and then I highly recommend installing cachix, as well.
If it helps, here is my install_nix bash function, and here is my ${XDG_CONFIG_HOME}/nix/nix.conf file (note: on macOS, this will likely be ~/.config/nix/nix.conf). Feel free to copy the conf file, and just remove https://rpearce.cachix.org from substituters and rpearce.cachix.org-1:...= from the trusted-public-keys (or replace them with your own cache from cachix!).
While you’re at it, we aren’t using devenv.sh nor nix-direnv in this example, but you should check them out later, too.
From the hakyll-nix-template page, click “Use this template” and then select “Create a new repository” from the popover menu.
Next, create a new repository from the template, filling in the details you want for the repo.
After creating the repository, click the “<> Code” button, then choose your method of cloning the repository.
Once you’ve chosen your preferred cloning command and run it in your terminal, cd into the directory.
Alright! We’re ready to build and personalize our project.
Run nix build, answer any substituters trust prompts, and then go do something else for a while. The first run takes a while, and how long it takes depends on connection speed, processing speed, and, most importantly, what caches you have set up in nix.conf (and/or flake.nix).
Once that is all done, you’ll have a brand new result/ directory available that is a symlink to /nix/store/<HASH>-website/. For this blog, it looks like this:
result/
└── dist/
├── CNAME
├── _config.yml
├── announcing-react-medium-image-zoom-v4.html
├── asynchronously-loading-scripts.html
├── atom.xml
├── be-better.html
├── behaviour-your-team.html
├── berlin.html
├── build-your-team-an-accessible-shareable-component-library.html
├── catch-low-hanging-accessibility-fruit-with-axe-core.html
├── chief.html
├── css
│ ├── article.css
│ ├── default.css
│ └── home.css
├── delegate-dont-dump.html
├── ...
This is your static output! While you could cd result/dist and run either npx serve . or python -m SimpleHTTPServer, let’s do this the hakyll-nix-template way:
λ nix run . watch
Listening on http://127.0.0.1:8000
Initialising...
Creating store...
Creating provider...
Running rules...
Checking for out-of-date items
Compiling
Success
Lovely! If we navigate to http://127.0.0.1:8000, we’ll see the default webpage included in the project.
In a new terminal pane or window, run nix develop (note: this may take a while the first time):
λ nix develop
[hakyll-nix]λ
When you have [hakyll-nix]λ as your prompt, you know that you’re in a nix shell. This comes preloaded with most of your existing CLI tools, plus cabal, ghc, haskell-language-server, and hlint. If you want it to be exactly your environment plus the nix develop shell, check out nix-direnv.
At this point, if you’re using Vim, for example, you can run vim . and open the project up with access to the aforementioned tools.
Now, it’s time to customize the project for you.
First, go back to the window where you ran nix run . watch and cancel that; e.g., press ctrl + c.
Next, using your editor, open ssg/src/Main.hs and read over the PERSONALIZATION section near the top:
------------------
-- PERSONALIZATION
mySiteName :: String
mySiteName = "My Site Name"
mySiteRoot :: String
mySiteRoot = "https://my-site.com"
myFeedTitle :: String
myFeedTitle = "My Site"
myFeedDescription :: String
myFeedDescription = "My Site Description"
myFeedAuthorName :: String
myFeedAuthorName = "My Name"
myFeedAuthorEmail :: String
myFeedAuthorEmail = "me@myemail.com"
myFeedRoot :: String
myFeedRoot = mySiteRoot
This area contains all the high level, site-based customization text and root URLs for you to update. Go ahead and do that.
Below this area, you’ll find the CONFIG section:
-- Default configuration: https://github.com/jaspervdj/hakyll/blob/cd74877d41f41c4fba27768f84255e797748a31a/lib/Hakyll/Core/Configuration.hs#L101-L125
config :: Configuration
config =
  defaultConfiguration
    { destinationDirectory = "dist"
    , ignoreFile = ignoreFile'
    , previewHost = "127.0.0.1"
    , previewPort = 8000
    , providerDirectory = "src"
    , storeDirectory = "ssg/_cache"
    , tmpDirectory = "ssg/_tmp"
    }
  where
    ignoreFile' path
      | "." `isPrefixOf` fileName = False
      | "#" `isPrefixOf` fileName = True
      | "~" `isSuffixOf` fileName = True
      | ".swp" `isSuffixOf` fileName = True
      | otherwise = False
      where
        fileName = takeFileName path
This section specifically deals with your hakyll config. If you want to change the development server port, host, content, source directory, what files are or aren’t ignored, and some caching things, then you can do so here.
The rest of the file is all related to hakyll and the build, so if you know hakyll already, this should feel familiar, and feel free to customize it however you like.
Do note that any changes you make inside of ssg/ mean you’ll need to turn your dev server off and on again.
Now that we’ve customized our config, turn the dev server back on with nix run . watch. It’s time to add our first post!
Navigate to the src/posts/ folder and add a new markdown file with this naming format:
2023-02-10-my-real-post.md
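Since the date is part of the filename, a small convenience is to let the shell stamp today’s date for you (my-real-post is whatever slug you want):

λ touch "src/posts/$(date +%Y-%m-%d)-my-real-post.md"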
As you can see from the other posts already in this directory, we have post metadata (a.k.a. front-matter) and then the post content follows that. For example:
---
author: "Robert Pearce"
authorTwitter: "@RobertWPearce"
desc: "Welcome to the fun, probably over-engineered world of nix and haskell to make a website"
image: "./images/some-image.webp"
keywords: "hakyll, nix, haskell, static site generator"
lang: "en"
title: "Today, I used hakyll-nix-template"
---
Hello, world! I am here!
…but customize this with your own content.
Save the file and watch your dev server reload and pick it up! If you refresh your browser, you should now see your post on the index page.
The author, desc, title, and other meta fields from the prior section are all completely customizable by you! These are fields that you can change, remove, or add more of, and they are used in your HTML templates in the src/templates/ folder.
If you open src/templates/post.html, you’ll see something like this:
<main>
  <article>
    <header>
      <h1>
        <a href=".$url$">$title$</a>
      </h1>
      <div>
        <small>$date$</small>
        $if(updated)$
          <small>(updated: $updated$)</small>
        $endif$
      </div>
    </header>
    <section>
      $body$
    </section>
  </article>
</main>
This is all a part of hakyll, but I’ll cover some of this here to make it easier to understand all in one place.
See $title$? That comes from our post metadata, and updated looks like it’s an optional field from our metadata, but where does $date$ come from? Or $body$?
In ssg/src/Main.hs, you’ll see postCtx:
postCtx :: Context String
postCtx =
  constField "root" mySiteRoot
    <> constField "siteName" mySiteName
    <> dateField "date" "%Y-%m-%d"
    <> defaultContext
This is a post context that gets built up and supplied to the template. Hakyll has a special dateField helper that parses a date from your post filename if it begins with a date. It also has defaultContext, which handles things like your post/web page’s body content.
What is significant about this example is that this is a place where you can pass in values at a global level; note that constField is including some of the personalization fields you filled out earlier. Passing those in the right context gives your templates access to them.
You can read more on this from jaspervdj, themself: https://jaspervdj.be/hakyll/tutorials/04-compilers.html
Before we wrap this section up, you should know that you can also add as many templates as you like, as well, and reference them in other templates using this format:
<!-- Inside templates/post.html... -->
<section class="section-subscribe">
$partial("templates/subscribe.html")$
</section>
You will inevitably want to copy static files from your source code into your outputted build, and this is easy with hakyll’s copyFileCompiler in ssg/src/Main.hs, just inside the main function.
main :: IO ()
main = hakyllWith config $ do
  forM_
    [ "CNAME"
    , "favicon.ico"
    , "robots.txt"
    , "_config.yml"
    , "images/*"
    , "js/*"
    , "fonts/*"
    ]
    $ \f -> match f $ do
      route idRoute
      compile copyFileCompiler
Each file or folder glob here exists inside the src/ directory. If you have something you want copied over to the build, this is the place to do it.
If you find you need to ignore a certain file or extension, consult the ignoreFile' function in the config and add your problematic file, prefix, or extension to the guard. For example, my macOS likes to add .DS_Store everywhere, so I did this:
ignoreFile' path
  | ".DS_Store" == fileName = True -- this line
  | "." `isPrefixOf` fileName = False
  | "#" `isPrefixOf` fileName = True
  | -- ...
The GitHub Actions workflow can be found in .github/workflows/main.yml. There are two jobs here: build-nix and deploy, and deploy only runs on the main branch.
build-nix job

This is the main job, and it does four things:

* sets up nix on the runner
* sets up cachix caching
* runs nix-build
* stores the result of nix-build for use later (your website output)

deploy job

When code is pushed to the main branch, the deploy job will:

* take the stored website output from the build-nix job
* push that output to a gh-pages branch and deploy your code to that branch

CACHIX_AUTH_TOKEN
You may have noticed a {{ secrets.CACHIX_AUTH_TOKEN }} used in this file. Here are the steps to setting this up:

* Generate an auth token for your cache over on cachix
* Go to your project’s Settings tab, then click on Secrets and Variables, then Actions, and add a repository secret called CACHIX_AUTH_TOKEN where you set that variable. At present, a direct link to this is https://github.com/youruser/yoursite.com/settings/secrets/actions
this is https://github.com/youruser/yoursite.com/settings/secrets/actionsWhile you’re in the Settings
tab, go to the Pages
page, enable GitHub Pages,
set the Source
to Deploy from a branch
, set that branch to gh-pages
, and
make sure the directory for that branch is / (root)
.
Follow the GitHub Pages custom domain guide for heaps of info on how to deploy your site to your web domain.
When a CSS or JS file changes, we need a way to break browser caches to ensure they get the latest version. The way to do this is to generate a hash of that file’s contents, generate a file with that content hash in the filename when building, and then make sure any output that references that CSS or JS file reflects this updated filename, as well.
I have no idea how to do this yet, but I’ll figure it out!
See Tony Zorman’s post on pygmentising hakyll for details on some issues with the skylighting library. I’ll likely follow this post in order to switch up the syntax highlighting to something better, or at least allow people to work with whatever they want.
Enjoy!
Note: if sarcasm and self-deprecation aren’t your thing, you can skip to the real-talk takeaways.
Here is the much prettier PDF version that is also useful for sending to your teammates or using in your own lunch-n-learn tech talk.
Thanks to KronisLV on the orange site for helping me fix an issue where the PDF was accidentally auto-downloading in Firefox.
How to Lose Functional Programming at Work - PDF
const processData = composeP(syncWithBackend, cleansePII, validateData)
// * What arguments and their types are expected here?
//
// * If each function is written like this, how can
// one suss out what data are flowing where?
//
// * How hard is this going to be to debug?
// Use this everywhere: `(x) => (console.log(x), x)`
Oh, so point-free style programming is the problem? Not so fast:
async function processData(data) {
  await validateData(data)
  const cleansedData = cleansePII(data)
  await syncWithBackend(cleansedData)
  return data
}

// or for the Promise-chainers…
const processData = data =>
  validateData(data)
    .then(cleansePII)
    .then(syncWithBackend)
    .then(() => data)
Do keep telling yourself that any of these 3, on their own, are easy for your teammates to work with after 3 months.
Deprive your team of this clarity and helpful auto-completion:
// NOTE: this is an untested, small example
/**
* @typedef {Object} ReportingInfo
* @property {("light"|"dark")} userTheme - Current user's preferred theme
* @property {string} userName - Current user's name
* @property {UUID} postId - The current post's ID
*/
/**
* Validates that the reporting data (current user site prefences and post info)
* is OK, removes personally identifiable information, syncs this info with the
* backend, and gives us back the original data.
*
* @param {ReportingInfo} data - The current user's site preferences and post info
* @returns {Promise<ReportingInfo>} - The original reporting data
*/
const processData = data => // …
Truly believe, in your heart, that you can write a pile of blog posts, collect a bunch of other great learning resources, hand them all to a new FP learner, recommend they read as much as they can then come back with questions, and expect them to come out the other side at all.
Conversely, spend all your time and energy on a couple of individuals, neglect the others, fail to write any useful learnings down, and forget to encourage these initiates to turn around and help teach their other colleagues, in turn.
Instead, if you keep it to yourself, other teams won’t get to contribute and probably improve the state of things.
Watch the video, “Point-Free or Die: Tacit Programming in Haskell and Beyond”, by Amar Shah
Contrived example:
import { __, any, lt } from 'ramda'
const anyLt0 = any(lt(0, __)) // hint: this has a bug in it
anyLt0([1, 2, 3]) // true — ugh…
// vs. the probably pretty simple…
const anyLt0 = numbers => numbers.some(n => n < 0)
anyLt0([0, 1, 2, 3]) // false
anyLt0([0, 1, 2, -1, 3]) // true — looks good
// 👆 should we resist eta-converting this?!
// …
// NOT ON MY WATCH
const any = fn => array => array.some(fn)
const isLtN = x => n => x < n
const isLt0 = isLtN(0)
const anyLt0 = any(isLt0)
anyLt0([1, 2, 3]) // true — ugh; the bug is back
Real, but altered, example:
const finishItems = compose(
  flip(merge)({ isDone: true, amtComplete: 100 }),
  over(
    lensProp('indexedObjects'),
    mapVals(
      compose(
        over(lensProp('indexedObjects'), mapVals(assoc('isDone', true))),
        assoc('isDone', true)
      )
    )
  )
)
I was at Sandi Metz’ RailsConf 2014 Chicago talk, All the Little Things, where she blew my mind with the simplicity of “preferring duplication over the wrong abstraction”. Two years later, she followed it up with some great blog commentary, The Wrong Abstraction.
But in this case, dilute your core business logic to broad generalizations that can be extracted and abstracted over and over, fail to understand category theory enough for this to be useful, and be the only one who knows how these abstractions work.
You’ll know you’ve lost people when normally thorough PR reviews now look like, “👍”.
Make sure that people coming into the project have your old code patterns to emulate that you cringe looking at years later but never made the time to update.
While you could allocate investment time to this or reading up on how to improve your technical leadership skills, spend that time making new features, instead.
* If recursion becomes a problem, you can throw a trampoline function over the issue to make it go away and not blow out your call stack
* JS doesn’t ship with curry and compose functions by default, meaning you’ll have to go the extra mile like Brian does in Debugging functional to prevent the issues described by Thai in Partially-applied (or curried) functions could obfuscate the JavaScript stack trace (Thai’s ultimate recommendations are “use a typed language that guarantees that your functions will never receive an invalid data” or “just don’t go overboard with pointfree style JavaScript”)
* A language like Haskell can fuse map g . map f into a single map thanks to composition, knocking out the work in one go at runtime. While .map(…).map(…).map(…) seems to be optimized pretty ok in JS runtimes, you’re still asking it to do N times the work, and you may not realize it. Oops.

On the surface, this isn’t so difficult to read…
// handler for POST /posts
import { createPost } from 'app/db/posts'
import { authenticateUser, authorizeUser } from 'app/lib/auth'
import { trackEvent } from 'app/lib/tracking'

const validateRequestSchema = payload => { /* … */ }

export const handleCreatePost = curry(metadata =>
  pipeP(
    authenticateUser(metadata),
    authorizeUser(metadata),
    validateRequestSchema,
    createPost(metadata),
    tapP(trackEvent('post:create', metadata)),
    pick([ 'id', 'authorId', 'title' ])
  )
)
Did you catch that this expects 2 arguments? Did you also know that authenticateUser ignores the 2nd argument sent to it? How would you? And what about trackEvent? Does it receive the payload, or does createPost() return post-related data?
Let’s write this another way:
export async function handleCreatePost(metadata, payload) {
  await authenticateUser(metadata)
  await authorizeUser(metadata, payload)
  await validateRequestSchema(payload)

  const post = await createPost(metadata, payload)

  await trackEvent('post:create', metadata, payload)

  return {
    id: post.id,
    authorId: post.authorId,
    title: post.title,
  }
}
I’m not saying that option #2 is an awesome handler, but if you want to make it trickier for people, go with option #1.
const setBookReadPercentByType = (contentType, statusObject) =>
  assoc(
    'readPercent',
    pipe(
      prop('subItems'),
      values,
      filter(propEq(contentType, 'chapter')),
      length,
      flip(divide)(compose(length, keys, prop('subItems'))(statusObject)),
      multiply(100),
      Math.round
    )(statusObject),
    statusObject
  )
Do have 8+-ish different patterns for function composition
// 👇 These 4, plus Promisified versions of them,
// plus combinations of them all used at once;
// doesn't include ramda's pipeWith and composeWith

// compose
const getHighScorers =
  compose(
    mapProp('name'),
    takeN(3),
    descBy('score')
  )

// pipe
const getHighScorers =
  pipe(
    descBy('score'),
    takeN(3),
    mapProp('name')
  )

// composeWithValue
const getHighScorers = players =>
  composeWithValue(
    mapProp('name'),
    takeN(3),
    descBy('score'),
    players
  )

// pipeWithValue
const getHighScorers = players =>
  pipeWithValue(
    players,
    descBy('score'),
    takeN(3),
    mapProp('name')
  )

// …but then now mix and match them with actual,
// real-life business logic.
Ensure your team is surprised by all of the following words when debugging or altering your code in the pursuit of their own work tasks:

* Task, Maybe, Either, Result, Pair, State
* bimap
* chain
* bichain
* option
* coalesce
* fork
* sequence
* ap
* map, and I don’t mean Array.prototype.map, nor a new Map(), nor a key/value object

Instead, and in the name of immutability, use data pipelines in your app to apply changes to your data, one transformation at a time, and accidentally do as many key/value iterations and memory allocations as possible. 😬
What you have here works great, but what could this look like if we flipped all the function arguments around, removed all these intermediate variables, and mapped these operations over an
Either
?
or
I noticed you’re explicitly constructing these objects in their functions. If you were to use <UTILITY-FUNCTION>, you could declare the shape of your outputted object and use functions as the values to look up or compute each value given some data.
Many of the backwards recommendations here can be, on the surface, written off as symptoms of inexperience, a lack of technical leadership from me, and obviously not the right paths.
But I think it’s something deeper than those easy explanations.
Most things in life need to be tended to in order for them to go the ways that we’d like them to; our relationships, our physical & mental health, our gardens. With most of these things in life, we strive to purposefully sculpt our futures.
However, there are many things that we accidentally sculpt. For example, if the fastest way from your back door to your garden is through your grassy yard, the simplest thing is to walk over the grass to get there. It makes sense for a while, but over time, your stepping on the grass carves a path that you never intended to create — it was an unintended consequence of your gardening.
This same thing happens with our minds and in our work. If we’re not paying attention to the big picture, the path of least resistance can carve canyons.
In my case, here, not taking responsibility of a path I helped create, coupled with persistent imposter syndrome and a feeling I needed to ship features and just look out for myself, instead of making time for re-evaluation, helped lead to the difficulties above for others and a loss of “higher” functional programming in a pretty good workplace that gives teams the freedom to choose their own tools.
But all is not lost! The core tenets of FP seem to remain:

* no classes (React doesn’t count), no inheritance
* map/filter/reduce, etc.

It seems a happy balance has been collectively decided on, and I’m excited to see where it goes. Perhaps, this time around, I’ll be better.
Thanks for reading,
Robert
This is part 6 of a multipart series where we will look at getting a website / blog set up with hakyll and customized a fair bit.
In this post we’re going to create a new hakyll site from scratch with a caveat: we will do just about everything with nix in order to guarantee reproducibility for anyone (or anything) using our project. There are also two bonuses that we will inherit simply because we are using nix:

* we will not need to rely on global package installs (apart from nix, of course)
* we will be able to easily patch any package problems; for example, if some of hakyll’s dependencies are not available in nixpkgs, we can patch hakyll to get it to work.
Here is the example repository with what we’re going to make: https://github.com/rpearce/hakyll-nix-example
Note: this post assumes that you have installed nix on your system.
Make a new project with release.nix, default.nix, and shell.nix, and get into its pure nix shell environment:
λ mkdir hakyll-nix-example && cd $_
λ echo "{ }: let in { }" > release.nix
λ echo "(import ./release.nix { }).project" > default.nix
λ echo "(import ./release.nix { }).shell" > shell.nix
λ nix-shell --pure -p niv nix cacert
We won’t have to touch default.nix nor shell.nix again, for we are delegating their responsibilities to the release.nix file that we’ll add more to in a moment.
Note: we require nix and cacert when running a pure nix-shell with niv because of an issue (https://github.com/nmattia/niv/issues/222).
Now that we’re in the nix shell, initialize niv and specify your nixpkgs owner, repository, and branch to be whatever you want:
[nix-shell:~/projects/hakyll-nix-example]$ niv init
[nix-shell:~/projects/hakyll-nix-example]$ niv update nixpkgs -o NixOS -r nixpkgs-channels -b nixpkgs-unstable
[nix-shell:~/projects/hakyll-nix-example]$ exit
Update your release.nix file with the following:
let
  sources = import ./nix/sources.nix;
in
{ compiler ? "ghc883"
, pkgs ? import sources.nixpkgs { }
}:

let
  inherit (pkgs.lib.trivial) flip pipe;
  inherit (pkgs.haskell.lib) appendPatch appendConfigureFlags;

  haskellPackages = pkgs.haskell.packages.${compiler}.override {
    overrides = hpNew: hpOld: {
      hakyll =
        pipe
          hpOld.hakyll
          [ (flip appendPatch ./hakyll.patch)
            (flip appendConfigureFlags [ "-f" "watchServer" "-f" "previewServer" ])
          ];

      hakyll-nix-example = hpNew.callCabal2nix "hakyll-nix-example" ./. { };

      niv = import sources.niv { };
    };
  };

  project = haskellPackages.hakyll-nix-example;
in
{
  project = project;

  shell = haskellPackages.shellFor {
    packages = p: with p; [
      project
    ];
    buildInputs = with haskellPackages; [
      ghcid
      hlint # or ormolu
      niv
      pkgs.cacert # needed for niv
      pkgs.nix # needed for niv
    ];
    withHoogle = true;
  };
}
Don’t worry, we’ll circle back to what we just did.
Create a hakyll.patch diff file:
λ touch hakyll.patch
Bootstrap the hakyll project (we won’t ever need this again):
λ nix-shell --pure -p haskellPackages.hakyll --run "hakyll-init ."
Build the project, using --show-trace just in case something goes wrong:
λ nix-build --show-trace
Run the local dev server:
λ ./result/bin/site watch
Navigate to http://localhost:8000 and see your local dev site up and running!
The release.nix File

Let’s break down what we copied and pasted into release.nix.
let
  sources = import ./nix/sources.nix;
in
{ compiler ? "ghc883"
, pkgs ? import sources.nixpkgs { }
}:
# ...
The let gives us the space to define an attribute (variable), and it is here that we import our sources.nix file that was generated by niv. The in block defines a function parameter with two attributes, compiler and pkgs, that each have defaults (when there’s a :, that means what comes next is a function body or another function argument). For the compiler, we will use this version to compile all of the Haskell packages that we interact with. For the pkgs, we default to using our pinned version of nixpkgs, but this is overridable.
# ...
let
  inherit (pkgs.lib.trivial) flip pipe;
  inherit (pkgs.haskell.lib) appendPatch appendConfigureFlags;
# ...
Our new let falls within the function we created above, and we then state that we would like to inherit some nice functions from pkgs.lib.trivial and pkgs.haskell.lib.
The flip and pipe functions are standards in functional programming, but I’ll share a short recap:

* flip takes a function a -> b -> c and flips the accepted arguments to act like b -> a -> c. Its definition is flip = f: a: b: f b a; – it takes a function, then a, then b, and then it applies a and b in reversed (flipped) order.
* pipe establishes a set of functions that you can apply data to, one after the other. Think of bash pipes: cat blog_post.txt | grep nix.
let
  # ...
  haskellPackages = pkgs.haskell.packages.${compiler}.override {
    overrides = hpNew: hpOld: {
      hakyll =
        pipe
          hpOld.hakyll
          [ (flip appendPatch ./hakyll.patch)
            (flip appendConfigureFlags [ "-f" "watchServer" "-f" "previewServer" ])
          ];

      hakyll-nix-example = hpNew.callCabal2nix "hakyll-nix-example" ./. { };

      niv = import sources.niv { };
    };
  };
# ...
This uses our pinned (or overridden) nixpkgs to create our own haskellPackages for a specific Haskell compiler version.

For hakyll, we need to make sure it gets compiled with the watchServer and previewServer flags, or we won’t be able to use its local dev server. We also provide an optional patch file (git diff > hakyll.patch) that we can build hakyll with if there are any changes to the project that we need to make. Patch files can be empty when no patches are required, but if you do need to patch something, here is an example hakyll.patch file:
diff --git a/hakyll.cabal b/hakyll.cabal
index fcded8d..9746f20 100644
--- a/hakyll.cabal
+++ b/hakyll.cabal
@@ -199,7 +199,7 @@ Library
If flag(previewServer)
Build-depends:
wai >= 3.2 && < 3.3,
- warp >= 3.2 && < 3.3,
+ warp,
wai-app-static >= 3.1 && < 3.2,
http-types >= 0.9 && < 0.13,
fsnotify >= 0.2 && < 0.4
The hakyll-nix-example attribute is specifically for our Haskell project in order for us to be sure our project is compiled with our desired compiler version. We leverage the callCabal2nix tool to handle automatically converting our hakyll-nix-example.cabal file into a nix derivation for our build.

Lastly, we ensure that the niv we are using in the nix-shell is our pinned niv that niv itself generated.
let
  # ...
  project = haskellPackages.hakyll-nix-example;
in
{
  project = project;
  # ...
The project attribute is what our default.nix will use when being called with tools like nix-build. All we do is access our hakyll-nix-example attribute from our customized haskellPackages.
let
  # ...
in {
  # ...
  shell = haskellPackages.shellFor {
    packages = p: with p; [
      project
    ];
    buildInputs = with haskellPackages; [
      ghcid
      hlint # or ormolu
      niv
      pkgs.cacert # needed for niv
      pkgs.nix # needed for niv
    ];
    withHoogle = true;
  };
}
Exactly like default.nix uses the project attribute, shell.nix looks for a shell attribute that defines everything it needs when running nix-shell --pure. We use shellFor, which comes with the nixpkgs Haskell tools, and we provide it a few attributes:

* the packages attribute holds our project package and any other nixpkgs packages that you would like to have built when entering the shell
* the buildInputs attribute holds all the tools that we’ll have available to us while we’re in the shell; for example, you can run ghcid and load your Haskell code to test it out or run hlint to lint your Haskell files
* withHoogle gives us the ability to query https://hoogle.haskell.org
Using nix to build our project helps make development consistent and predictable; however, learning nix is not necessarily a breeze. The following articles directly contributed to my understanding that led to this post:
* overrideAttrs: https://nixos.org/nixpkgs/manual/#sec-pkg-overrideAttrs
* overlays: https://nixos.org/nixpkgs/manual/#chap-overlays

Thank you for reading!
Robert
By the end of this post, you will be able to use TypeScript, React, Storybook, and more to provide a simple way to create accessible components that can be included in all of your projects.
If you’d like to skip to the code, here is the example component library we’re going to make: https://github.com/rpearce/example-component-library.
This is a big post that covers a lot of ground, so buckle up.
Components make up large parts of our applications. As projects age, components can become increasingly coupled with other components, business logic, and application state management tools like redux.
These components usually start out small, focused, and pure. As time passes and the imperative of timely code delivery takes its toll, these components become harder to compose, harder to reason about, and cause us to yearn for simpler, less-involved times.
Instead of rewriting those components in place and repeating the same process, consider extracting and developing each one in isolation in a library. This will allow you to keep each one’s surface area small and keep your business logic, state management, routing logic, etc., where it belongs: in your application.
With this scenario, a good intermediary step, before pulling components into their own project, would be to create a folder in your application for these components and set up a tool like storybook to house the individual examples and compositions of them.
Consider this exchange:
Them: You know that spinner/widget/dropdown/search thing we have over here? It looks and works great! We want the same thing over here and over here. How difficult is that?
Me: Those are different projects, and that is really more like 4 different components working together, so a) hard to do cleanly but good for the long-term or b) easy (for now) if I copy and paste.
Them: We need to ship.
Me: Okay, so copy and paste it is…
What’s special about this exchange is that both sets of concerns and perspectives are valid. Software stakeholders typically want and need to ship features and fixes quickly, and they usually want to maintain brand consistency across their ecosystems. Software developers at those companies want to be able to ship features and fixes and maintain brand consistency, but they are also aware of the cost of short-term decision making (this is a way of accruing technical debt).
We know that even the best code is useless to a business if there are no customers around paying to use it, but we also know that suboptimal tech decision making can grind projects to a halt over time, averting the stakeholder’s directive of shipping features and fixes quickly.
So what can we do to not only amend the scenario above but also make this undesired state unrepresentable in the future? We can start our projects with an accompanying component library! For existing projects, we can begin moving them in that direction.
Let’s first define how we are going to include our components in our project.
Component JavaScript can be imported in a few different ways:
// import from the main (or module) specification in
// package.json, depending on your bundler and its version
import { Circle } from 'mylib'
// straight from the ESModule build
import Circle from 'mylib/dist/esm/Circle'
// straight from the CommonJS build
import Circle from 'mylib/dist/cjs/Circle'
// straight from the Universal Module Definition build
import Circle from 'mylib/dist/umd/Circle'
Component CSS can be imported like this:
import 'mylib/dist/css/Circle/styles.css'
If you know you will use all of the components and wish to import all of their CSS at once:
import 'mylib/dist/css/styles.css'
The JS import is simple enough, but you might be wondering, “What’s the deal with importing CSS like this? I thought we were on to things like styled-components, emotion, CSS modules, etc?”
These tools are great if the consuming application can bundle up and inject the styles using the same instance of the tool, but can you guarantee each app will use these same styling tools? If so, by all means go that direction. However, if your library is injecting its own styles into the document at runtime, you will not only potentially run into style specificity / collision issues if you don’t have the application styles load last, but strict content security policies will potentially disallow the dynamically added styles from even being applied!
The solution? Go with the lowest common denominator: regular, vanilla CSS (or something that outputs regular, vanilla CSS). We’ll come back to this in the example component section.
It’s time to build the project! Here are the main tools we will use:

* Node (13.13.0 at the time of writing)
* TypeScript
* React
* Storybook
* Jest and @testing-library/react
* ESLint and Prettier
* husky and lint-staged (to lint before we push via a pre-push git hook)

Here is the top-level folder structure we are aiming for:
.
├── .storybook (1)
│ └── ...
├── dist (2)
│ └── ...
├── docs (3)
│ └── ...
├── examples (4)
│ └── ...
├── scripts
│ └── buildCSS (5)
├── source (6)
│ └── ...
├── .eslintignore
├── .eslintrc.js
├── .gitignore
├── .prettierrc.js
├── CHANGELOG.md (7)
├── LICENSE (8)
├── README.md
├── husky.config.js
├── jest.config.js
├── lint-staged.config.js
├── package.json
├── testSetup.ts
├── tsconfig.base.json (9)
├── tsconfig.cjs.json
├── tsconfig.esm.json
├── tsconfig.json
└── tsconfig.umd.json
* .storybook/ – storybook examples configuration
* dist/ – compiled project output
* docs/ – compiled storybook examples output
* examples/ – add create-react-app, gatsby, and other example projects here
* scripts/buildCSS – store build scripts here like this CSS-related one
* source/ – where your project lives; we’ll dive into this in the next section
* CHANGELOG.md – be a good teammate and document your library’s changes; very useful for your teams and useful if you decide to open source the project
* LICENSE – a good idea if you plan to open source; otherwise, put UNLICENSED in your package.json license field
* tsconfig.json, et al – typescript build configs; we’ll dive into this in the project setup section

And here is the source/ folder layout:
.
└── source
    ├── ComponentA
    │   ├── __snapshots__
    │   │   └── test.tsx.snap
    │   ├── index.tsx
    │   ├── stories.tsx
    │   ├── styles.css
    │   └── test.tsx
    ├── ComponentB
    │   └── ...
    ├── ComponentC
    │   └── ...
    ├── index.ts
    └── test.tsx
The component and everything to do with it are co-located in the
source/ComponentA/
folder:
* index.tsx
component file (and any additional component files)
* storybook stories
* CSS
* tests
This grouping of everything having to do with a component makes it very easy to find everything you need. If you would prefer a different setup, you can adjust the tool configurations however you like.
Each component is then exported from the main index.ts
file.
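For reference, here is a minimal sketch of what that barrel file can look like once we build the Circle component later in this post (the commented-out line is a hypothetical placeholder for future components):

// source/index.ts
export { default as Circle } from './Circle'
// export { default as AnotherComponent } from './AnotherComponent'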
It’s now time to start the project from scratch and make this outline a reality!
To begin, let’s create the project and a package.json
file with some
project-related information:
$ mkdir example-component-library && cd $_
$ touch package.json
And in package.json
:
{
"name": "@yournpm/example-component-library",
"version": "0.1.0",
"description": "Example repository for a shared React components library",
"main": "dist/cjs/index.js",
"module": "dist/esm/index.js",
"repository": {
"type": "git",
"url": "git@github.com:yourgithub/example-component-library.git"
},
"homepage": "https://github.com/yourgithub/example-component-library",
"bugs": "https://github.com/yourgithub/example-component-library",
"author": "Your Name <you@youremail.com>",
"license": "BSD-3",
"keywords": [],
"tags": [],
"sideEffects": ["dist/**/*.css"],
"files": ["LICENSE", "dist/"],
"scripts": {},
"devDependencies": {},
"peerDependencies": {
"react": "*",
"react-dom": "*"
},
"dependencies": {}
}
Once you save that, run an install to make sure everything parses OK:
$ npm install
Notably, we’ve set our main
field to dist/cjs/index.js
, the CommonJS build,
for compatibility with NodeJS environments because they don’t yet work well with
ESModules. We’ve set our module
field to look at dist/esm/index.js
, the
ESModule build. If you want to make use of the Universal Module Definition build
we’ll create later on, you can use the browser
field:
"browser": "dist/umd/index.js"
. Personally, if I build with webpack, I want
webpack to select the module
field over the browser
one because it will
always be of a smaller size, for the UMD builds are meant to be run in any of a
few different environments.
Also of importance is the sideEffects
field. If our library code was pure and
didn’t have side effects, we would set the value to false
, and build tools
like webpack would prune away all of the unused code. However, since we also are
exporting CSS, we need to make sure that it doesn’t get dropped by the build
tool, so we do that with "sideEffects": ["dist/**/*.css"]
.
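As a sketch of what this protects against: in a consuming application, a bundler that tree-shakes based on the sideEffects field could otherwise drop a CSS-only import entirely.

// in a consuming application:
import { Circle } from '@yournpm/example-component-library'
// with "sideEffects": false, a bundler may treat this import (which exists
// only for its side effect) as dead code and drop it; listing dist/**/*.css
// in "sideEffects" tells the bundler to keep it
import '@yournpm/example-component-library/dist/css/Circle/styles.css'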
Lastly, we know we’re going to be using React, so we can go ahead and set that
as a peerDependency
(it’s up to you to decide what versions of React you’ll
support).
We can now add TypeScript to our project with some compiler and project-related
options. We’ll also add some type definition libraries that we’ll use later, as
well as a dependency on tslib
to make
compiling our code to ES5 seamless.
$ npm install --save-dev --save-exact \
@types/node \
@types/react \
@types/react-dom \
typescript
$ npm install --save --save-exact tslib
$ touch tsconfig.base.json tsconfig.json
We will place our compilerOptions
in tsconfig.base.json
so that they can be
extended in all our different builds in the future:
{
"compilerOptions": {
"allowJs": false,
"allowSyntheticDefaultImports": true,
"declaration": true,
"esModuleInterop": true,
"importHelpers": true,
"jsx": "react",
"lib": ["es2020", "dom"],
"moduleResolution": "node",
"noImplicitAny": true,
"outDir": "dist/",
"sourceMap": false,
"strict": true,
"target": "es5"
}
}
Note that the importHelpers flag is what enables tslib: it tells the compiler to import its helper functions from tslib rather than inlining a copy of them into every compiled file.
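To illustrate (a rough sketch, not exact compiler output): with importHelpers enabled and our es5 target, a spread like the one below compiles down to a call to tslib’s __assign helper instead of that helper being duplicated in every output file.

// source TypeScript: an object spread needs a helper when targeting ES5
export const withDefaults = (props: object) => ({ size: 100, ...props })

// compiled output, approximately:
// import { __assign } from "tslib";
// export var withDefaults = function (props) { return __assign({ size: 100 }, props); };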
The tsconfig.json
will be used as a default to include our future source
directory:
{
"extends": "./tsconfig.base.json",
"include": ["source/**/*"]
}
We’ll add some more TypeScript-related packages when we get to the tools that need them, and we’ll add more TypeScript build configurations in the section on building our typescript.
Linting is a great way to have everyone adhere to the same set of rules for code style. For our project, we’re going to install a few tools to help us out.
$ npm install --save-dev --save-exact \
@typescript-eslint/eslint-plugin \
@typescript-eslint/parser \
eslint \
eslint-config-prettier \
eslint-plugin-jest \
eslint-plugin-jsx-a11y \
eslint-plugin-prettier \
eslint-plugin-react \
eslint-plugin-react-hooks \
husky \
lint-staged \
prettier
$ touch \
.eslintignore \
.eslintrc.js \
.prettierrc.js \
husky.config.js \
lint-staged.config.js
The .eslintignore
file will make sure we include files and folders that are
ignored by default (using the !
) and exclude files and folders that we don’t
care about linting.
!.eslintrc.js
!.prettierrc.js
!.storybook/
dist/
docs/
examples/
The .eslintrc.js
file is something you and your team will need to figure out
for yourselves, but here’s where I stand on the issues:
module.exports = {
env: {
browser: true,
es6: true,
jest: true,
node: true,
},
extends: [
'plugin:react/recommended',
'plugin:@typescript-eslint/recommended',
'prettier/@typescript-eslint',
'plugin:prettier/recommended',
'plugin:jsx-a11y/recommended',
],
parserOptions: {
ecmaVersion: 2020,
sourceType: 'module',
},
parser: '@typescript-eslint/parser',
plugins: ['jsx-a11y', 'react', 'react-hooks', '@typescript-eslint'],
rules: {
'@typescript-eslint/no-unused-vars': 'error',
'jsx-quotes': ['error', 'prefer-double'],
'jsx-a11y/no-onchange': 'off', // https://github.com/evcohen/eslint-plugin-jsx-a11y/issues/398
'no-trailing-spaces': 'error',
'object-curly-spacing': ['error', 'always'],
quotes: ['error', 'single', { allowTemplateLiterals: true }],
'react-hooks/exhaustive-deps': 'error',
'react-hooks/rules-of-hooks': 'error',
'react/prop-types': 'off',
semi: ['error', 'never'],
},
settings: {
react: {
version: 'detect',
},
},
overrides: [
{
files: ['*.js', '*.jsx'],
rules: {
'@typescript-eslint/explicit-function-return-type': 'off',
'@typescript-eslint/no-var-requires': 'off',
},
},
],
}
The .prettierrc.js
file defines your
prettier configuration:
module.exports = {
semi: false,
singleQuote: true,
}
We’re almost done with the linting! There are two files left.
For our husky.config.js
file, we’ll set it up to run lint-staged
before we
push our code to our repository:
module.exports = {
hooks: {
'pre-push': 'lint-staged',
},
}
And for lint-staged.config.js
, we’ll specify that we want to run eslint --fix
on our staged files:
module.exports = {
'*': ['eslint --fix'],
}
Now that we’ve got this all in place, we can update our package.json’s scripts object to include a lint command:
"scripts": {
"lint": "eslint ."
},
You can test this by running:
$ npm run lint
We’re going to use Jest and @testing-library/react
to handle running our tests and testing our component code, so let’s install
those tools and their companion TypeScript libraries. We’ll also install
axe-core to handle some automated accessibility testing.
$ npm install --save-dev --save-exact \
@testing-library/jest-dom \
@testing-library/react \
@types/jest \
axe-core \
jest \
ts-jest
$ touch jest.config.js testSetup.ts
Our jest.config.js
collects coverage from the right places, ignores
distribution and example directories, requires the testSetup.ts
file, and sets
us up to use TypeScript in our tests.
module.exports = {
clearMocks: true,
collectCoverage: true,
collectCoverageFrom: ['<rootDir>/source/**/*.{ts,tsx}'],
coveragePathIgnorePatterns: [
'/node_modules/',
'<rootDir>/source/@types',
'stories',
],
moduleNameMapper: {},
preset: 'ts-jest',
setupFilesAfterEnv: ['<rootDir>/testSetup.ts'],
testPathIgnorePatterns: ['dist/', 'examples/'],
verbose: true,
}
And here is our testSetup.ts
file that you can use to provide global testing
tools, patch JSDOM, and more:
import '@testing-library/jest-dom/extend-expect'
All we do in testSetup.ts
is add a lot of custom matchers to the expect
function from jest via @testing-library/jest-dom
.
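For example, once this setup file has run, matchers like these are available in any test (someElement here is a placeholder for an element you’ve rendered):

expect(someElement).toBeInTheDocument()
expect(someElement).toHaveAttribute('height', '200')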
While we’re on the testing subject, we should also update our package.json
’s
scripts
object to include a test
command:
"scripts": {
// ...
"test": "jest"
},
We don’t have any test files yet, but you can confirm everything is set up correctly by running
$ npm run test
Storybook is a great way to not only share examples of your components but also get instant feedback while developing them. It also comes with a great set of official addons.
Let’s install Storybook for React with TypeScript, and let’s also add the addons for accessibility and knobs:
$ npm install --save-dev --save-exact \
@storybook/addon-a11y \
@storybook/addon-knobs \
@storybook/preset-typescript \
@storybook/react \
babel-loader \
ts-loader
$ mkdir .storybook
$ touch .storybook/main.js
The .storybook/main.js
file is where we can specify our Storybook options:
module.exports = {
addons: [
'@storybook/addon-a11y',
'@storybook/addon-knobs',
'@storybook/preset-typescript',
],
stories: ['../source/**/*/stories.tsx'],
}
For our example component, we are going to make a circle with SVG. With only this simple component, we will cover the following aspects of component development:

* TypeScript interfaces for required and optional React props
* Component CSS
* Testing (regular, snapshot, and accessibility)
* Storybook examples
Let’s create the files we know we’re going to need:
$ mkdir -p source/Circle
$ touch source/Circle/index.tsx \
source/Circle/stories.tsx \
source/Circle/styles.css \
source/Circle/test.tsx
import React, { FC } from 'react'
// className, desc, and fill are optional,
// whereas title and size are required
interface Props {
className?: string
desc?: string
fill?: string
size: number
title: string
}
// we provide our Props interface to the
// function component type
const Circle: FC<Props> = ({
className = 'rl-circle',
desc,
fill,
size,
title,
}) => (
<svg
className={className}
height={size}
fill={fill}
role="img"
viewBox="0 0 100 100"
width={size}
xmlns="http://www.w3.org/2000/svg"
>
<title>{title}</title>
{desc && <desc>{desc}</desc>}
<circle cx="50" cy="50" r="50" />
</svg>
)
export default Circle
In this component file, we define the parameters that we’re willing to work
with, provide a fallback in the case of className
, and make a regular old
component.
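For example, a consumer can lean on the default rl-circle class or swap in their own (the values here are made up):

<Circle title="Water planet" size={100} />
<Circle title="Water planet" size={100} className="my-app-circle" />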
This file should be pretty straightforward, so let’s move on to the CSS!
This is a real easy one.
.rl-circle { margin: 1em; }
The rl
is short for “react library”, and I made it up. The CSS that we are
creating needs to be made unique, and prefixing your classes is the simplest way
of doing that.
It’s time to write some tests! We’re going to make explicit expectations and do some snapshot tests so that everybody is happy.
import React from 'react'
import { render } from '@testing-library/react'
import Circle from './index'
test('with all props', () => {
const { asFragment, container, getByText } = render(
<Circle
className="class-override"
desc="A blue circle"
fill="#30336b"
size={200}
title="Water planet"
/>
)
const svgEl = container.querySelector('svg')
const titleEl = getByText('Water planet')
const descEl = getByText('A blue circle')
expect(svgEl).toHaveAttribute('height', '200')
expect(svgEl).toHaveAttribute('width', '200')
expect(titleEl).toBeInTheDocument()
expect(descEl).toBeInTheDocument()
expect(asFragment()).toMatchSnapshot()
})
test('with only title & size', () => {
const { asFragment, container, getByText } = render(
<Circle title="Water planet" size={200} />
)
const svgEl = container.querySelector('svg')
const titleEl = getByText('Water planet')
const descEl = container.querySelector('desc')
expect(svgEl).toHaveAttribute('height', '200')
expect(svgEl).toHaveAttribute('width', '200')
expect(titleEl).toBeInTheDocument()
expect(descEl).not.toBeInTheDocument()
expect(asFragment()).toMatchSnapshot()
})
These first tests provide different sets of props and test various aspects of our component based on given props’ inclusion.
Next, we can use the axe-core
tool to try our hand at accessibility testing:
import axe from 'axe-core'
// ...
test('is accessible with title, desc, size', (done) => {
const { container } = render(
<Circle desc="A blue circle" size={200} title="Water planet" />
)
axe.run(container, {}, (err, result) => {
expect(err).toEqual(null)
expect(result.violations.length).toEqual(0)
done()
})
})
test('is inaccessible without title', (done) => {
const { container } = render(
<Circle desc="A blue circle" title="Water circle" size={200} />
)
// do something very wrong to prove a11y testing works
container.querySelector('title')?.remove()
axe.run(container, {}, (err, result) => {
expect(err).toEqual(null)
expect(result.violations[0].id).toEqual('svg-img-alt')
done()
})
})
While the first test should be clear, the second test almost seems pointless (hint: it is). I am including it here to demonstrate what a failing accessibility scenario might look like. In reality, the first test in this group pointed out the error in the second test, for I was originally not requiring title, but I was giving the SVG role="img". This is a no-no if there is no aria-label, aria-labelledby, or <title> to supply the SVG with any textual meaning.
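For reference, here is a sketch of equivalent ways to give an SVG with role="img" an accessible name; any of these satisfies the svg-img-alt rule:

// via a <title> child, which is what our Circle component does
<svg role="img" viewBox="0 0 100 100"><title>Water planet</title><circle cx="50" cy="50" r="50" /></svg>

// via aria-label
<svg role="img" aria-label="Water planet" viewBox="0 0 100 100"><circle cx="50" cy="50" r="50" /></svg>

// via aria-labelledby, pointing at the id of an element that contains the name
<svg role="img" aria-labelledby="planet-title" viewBox="0 0 100 100"><circle cx="50" cy="50" r="50" /></svg>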
Testing is easy if you keep things simple, and automated accessibility testing is even easier than that, for all you need to do is provide DOM elements.
I find it very difficult to do test driven development when developing
components, for it is an exploratory, creative experience for me. Instant
feedback makes it easy to run through all my bad ideas (there are many!) and
eventually land on some good ones. Storybook stories can help us do that, so
let’s make our first story in source/Circle/stories.tsx
.
import React from 'react'
import { storiesOf } from '@storybook/react'
import { withA11y } from '@storybook/addon-a11y'
import { color, number, text, withKnobs } from '@storybook/addon-knobs'
// import our component and styles from
// the distribution (build) output
import { Circle } from '../../dist/esm'
import '../../dist/css/Circle/styles.css'
// group our stories under "Circle"
const stories = storiesOf('Circle', module)
// enable the accessibility & knobs addons
stories.addDecorator(withA11y)
stories.addDecorator(withKnobs)
// add a new story and use the
// knobs tools to provide named
// defaults that you can alter
// in the Storybook interface
stories.add('default', () => (
<Circle
desc={text('desc', 'A blue circle')}
fill={color('fill', '#7ed6df')}
size={number('size', 200)}
title={text('title', 'Abstract water planet')}
/>
))
stories.add('another scenario...', () => (
  // title and size are required props; swap in whatever
  // other example props you'd like to showcase
  <Circle title="Another water planet" size={100} />
))
Each component gets its own stories.tsx
file, so there’s no need to worry
about them getting out of hand with all the different components in your
library. Add as many different stories for your components as you like! Our
Storybook config will collect them all for you into a single place.
We’ve already created a tsconfig.base.json
and tsconfig.json
file, and now
it’s time to add ones for CommonJS (CJS), ESModules (ESM), and Universal Module
Definitions (UMD). We will then add some NPM scripts to build out TypeScript for
us.
$ touch tsconfig.cjs.json tsconfig.esm.json tsconfig.umd.json
// tsconfig.cjs.json
{
"extends": "./tsconfig.base.json",
"compilerOptions": {
"module": "commonjs",
"outDir": "dist/cjs/"
},
"include": ["source/index.ts"]
}
// tsconfig.esm.json
{
"extends": "./tsconfig.base.json",
"compilerOptions": {
"module": "esNext",
"outDir": "dist/esm/"
},
"include": ["source/index.ts"]
}
// tsconfig.umd.json
{
"extends": "./tsconfig.base.json",
"compilerOptions": {
"module": "umd",
"outDir": "dist/umd/"
},
"include": ["source/index.ts"]
}
Each of these specifies where to find the source, what type of module to output,
and where to put the resulting compiled code. If you want your code to be
compiled to the output, make sure it is either included in the include
field
or is require
d by something that is.
In our package.json
, let’s add some scripts that make use of these configs:
"scripts": {
"build:js:cjs": "tsc -p tsconfig.cjs.json",
"build:js:esm": "tsc -p tsconfig.esm.json",
"build:js:umd": "tsc -p tsconfig.umd.json",
// ...
},
Easy! If you are guessing that we might want to run these all together in a
build:js
command, there are two ways to do that (one verbose and one less so).
Our first attempt:
"scripts": {
"build:js": "npm run build:js:cjs && npm run build:js:esm && npm run build:js:umd",
// ...
},
Not bad, but we can use the npm-run-all
tool to not only write a more succinct script but also run these in parallel!
$ npm install --save-dev --save-exact npm-run-all
"scripts": {
"build:js": "run-p build:js:cjs build:js:esm build:js:umd",
// ...
},
The npm-run-all
tool gives us run-p
for running scripts in parallel and
run-s
for running them sequentially.
Watching for changes is also very simple:
"scripts": {
// ...
"build:js:esm:watch": "tsc -p tsconfig.esm.json -w",
// ...
},
While we’re here, let’s go ahead and add a clean
ing script for our dist/
directory:
"scripts": {
// ...
"clean": "clean:dist", // we'll add more here shortly
"clean:dist": "rm -rf dist",
// ...
},
Now that we can do some clean
ing and build
ing, let’s create a single build
script that we can continue adding build steps to as we go:
"scripts": {
"build": "run-s clean build:js", // we'll add more here shortly
// ...
}
Give it all a whirl, if you like:
$ npm run build
You should see the following tree structure for your dist/
folder:
.
└── dist
    ├── cjs
    │   ├── Circle
    │   │   ├── index.d.ts
    │   │   └── index.js
    │   ├── index.d.ts
    │   └── index.js
    ├── esm
    │   ├── Circle
    │   │   ├── index.d.ts
    │   │   └── index.js
    │   ├── index.d.ts
    │   └── index.js
    └── umd
        ├── Circle
        │   ├── index.d.ts
        │   └── index.js
        ├── index.d.ts
        └── index.js
We’re getting places! We have JS, and now we need our CSS.
For our styles, we have two goals:
1. output each component’s styles in a component CSS folder like dist/css/Circle/styles.css
2. output a combination of each component’s styles in a single file in dist/css/styles.css
To achieve this, we’re going to write a short bash script, and we’re going to
place it in scripts/buildCSS
.
$ mkdir scripts
$ touch scripts/buildCSS
$ chmod +x scripts/buildCSS
And in scripts/buildCSS
:
#!/bin/bash
set -euo pipefail
function copy_css {
local dir=$(dirname $0)
local component=$(basename $dir)
local dist_css=$PWD/dist/css
# concatenate component CSS to main CSS file
mkdir -p $dist_css
cat $0 >> $dist_css/styles.css
# copy component CSS to component folder
mkdir -p $dist_css/$component/
cp $0 $dist_css/$component/
}
export -f copy_css
function build {
find $PWD/source \
-name '*.css' \
-exec /bin/bash -c 'copy_css $0' {} \;
}
build
We lean on some coreutils
here to solve our problems for us. The last line of
our script, build
, calls the function of the same name that looks inside the
source
directory for all CSS files and tells the bash
program to run
copy_css
with the path to the CSS file. There’s a catch, though: bash
is
going to run in a subshell, so we need to make sure our copy_css
function is
exported and available by export -f copy_css
.
For the copy_css function, it’s much simpler than it looks! Here are the steps:

1. mkdir -p $dist_css creates our output directory, dist/css.
2. cat $0 >> $dist_css/styles.css concatenates all the lines of our source CSS file and appends them to dist/css/styles.css.
3. mkdir -p $dist_css/$component/ creates a component CSS folder like dist/css/Circle/. We derive the $component variable by getting the basename of the dirname of our full CSS file path. For example, /Users/rpearce/projects/example-component-library/source/Circle/styles.css has a dirname of /Users/rpearce/projects/example-component-library/source/Circle, and that has a basename of Circle! Using that deduction, we can derive what component we’re working with and create that output directory simply by finding a CSS file.
4. cp $0 $dist_css/$component/ copies the source component CSS file to the output component directory; that’s it!
If you have a different CSS setup, you’ll need to adjust this build script accordingly.
Now that we have our buildCSS
script, we can add an NPM script
to handle
building this for us and add that to our build
script:
"scripts": {
"build": "run-s clean build:js build:css",
"build:css": "./scripts/buildCSS",
// ...
},
Similarly to our build:js:esm:watch
command, how might we watch for CSS
changes and run our script in a build:css:watch
command? Luckily, there’s a
tool that can help us with that: chokidar
.
$ npm install --save-dev --save-exact chokidar
"scripts": {
// ...
"build:css:watch": "chokidar \"source/**/*.css\" -c \"./scripts/buildCSS\"",
// ...
},
To develop our components and get instant feedback in our Storybook examples, we’re going to need to run a few things at once to get it all to work together.
First, let’s add a line to our package.json
’s scripts
object called
storybook
:
"scripts": {
// ...
"storybook": "start-storybook -p 6006"
},
Next, let’s add a start
command that, in this sequence,
1. cleans the dist/ directory
2. builds only the ESModule JS output
3. builds the CSS
and then, in parallel,
1. watches the JS for changes and rebuilds the ESModule output
2. watches the CSS for changes and rebuilds the CSS
3. runs storybook, which watches for changes to the prior two items, for it will detect changes to its imports from the dist/ folder
"scripts": {
// ...
"start": "run-s clean:dist build:js:esm build:css && run-p build:js:esm:watch build:css:watch storybook",
// ...
},
If you want to break those up into different scripts to make it more legible, here’s a way to do that:
"scripts": {
// ...
"start": "run-s start:init start:run",
"start:init": "run-s clean:dist build:js:esm build:css",
"start:run": "run-p build:js:esm:watch build:css:watch storybook",
// ...
},
You can then run this from the command line, and it should automatically open your web browser and take you to http://localhost:6006.
$ npm run start
Your Storybook library should have your component, and you can adjust the component knobs in one of the sidebars, and you can also see the accessibility audit located in the tab next to the knobs. Note: no amount of automated testing can guarantee accessibility, but it can help you catch silly mistakes.
With all these pieces in place, you can now develop your components and get instant feedback in the browser using the same code that you would provide to a consumer of your package!
Did you know that you can also build static HTML, CSS, and JavaScript files and
serve that up through something like GitHub Pages? We can update our
package.json
scripts
to include scripts for building our Storybook output
to the docs/
folder and for cleaning the docs/
folder, as well.
"scripts": {
// ...
"build:docs": "build-storybook -o docs",
"clean:docs": "rm -rf docs"
"storybook": "start-storybook -p 6006"
},
The clean:docs
script, if run first, will guarantee that we have fresh output
in our docs/
folder. Let’s give it a go:
$ npm run clean:docs && npm run build:docs
Since we can now clean and build our Storybook folder, we can update our build
and clean
scripts accordingly:
"scripts": {
"build": "run-s clean build:js build:css build:docs",
// ...
"clean": "run-p clean:dist clean:docs",
// ...
},
When you set up a continuous integration (CI) tool for this project, it will be
tempting to tell it to simply run $ npm run build
; however, this will not
include your linting and testing scripts, and you could potentially have a green
light from CI when really you have problems!
While you could always run your linting and testing scripts inside of build
(
this can get tedious) or multiple scripts from your CI configuration, let’s
instead add another script named ci
to handle this for us:
"scripts": {
// ...
"ci": "run-p lint build test",
// ...
},
No worries! Now we can use $ npm run ci
in our CI configuration.
I recommend adding a prepublishOnly
script that ensures your linter and tests
pass before trying to build your component output:
"scripts": {
// ...
"prepublishOnly": "run-p lint test && run-p build:js build:css",
// ...
},
Also, if you want this to be a private repository, make sure you add
"private": true
to your package.json
before publishing.
Thank you for reading this, and I hope this helps you create an awesome, accessible component library.
Robert
]]>That tool can be used on its own in your tests, or you can turn it into a
Promise
and use it like this!
import axe from 'axe-core'
const isA11y = html =>
new Promise((resolve, reject) => {
axe.run(html, {}, (err, result={}) => {
const { violations=[] } = result
if (err) {
reject(err)
} else if (violations.length > 0) {
reject(violations)
} else {
// Uncomment to view incomplete/unavailable tests & why
//console.log(result.incomplete)
resolve(true)
}
})
})
test('bad form', async () => {
const wrap = document.createElement('div')
wrap.innerHTML = `
<form>
<div>Enter your name</div>
<input type="text" />
<button type="submit">Submit</button>
</form>
`
document.body.appendChild(wrap)
expect(await isA11y(wrap)).toEqual(true)
})
// Failed: Array [
// Object {
// "description": "Ensures every form element has a label",
// "help": "Form elements must have labels",
// "helpUrl": "https://dequeuniversity.com/rules/axe/3.5/label?application=axeAPI",
// "id": "label",
// "impact": "critical",
// "nodes": Array [
// [Object],
// ],
// "tags": Array [
// "cat.forms",
// "wcag2a",
// "wcag332",
// "wcag131",
// "section508",
// "section508.22.n"
// ],
// }
// ]
It can detect all sorts of accessibility issues, so long as the environment in which it’s being tested supports the browser features used in axe-core’s tests. For example, jsdom, which jest uses as its browser mocking engine, only recently added some support for Range, and it seems there are still some aspects missing; this prevents axe-core from being able to test things like the accessibility of text color on certain backgrounds.
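If one of those unsupported checks gets in the way, axe-core lets you disable individual rules via its run options; for example (a sketch), the color-contrast rule is the usual jsdom offender:

axe.run(container, {
  rules: {
    // skip the check that jsdom can't reliably support
    'color-contrast': { enabled: false },
  },
}, (err, result) => {
  // handle err / result as usual
})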
That said, the sheer number of issues that can be caught with this tool is
staggering. If you work with tools like React and combine this with Deque’s
react-axe
tool and
eslint-plugin-jsx-a11y
,
you are sure to catch heaps of issues you might accidentally overlook. Note,
however, that these tools are not replacements for real accessibility testing.
Here is an example in a real OSS project of mine that uses this axe-core
technique with @testing-library/react
:
https://github.com/rpearce/react-medium-image-zoom/blob/6721f87370d968361d9d0d14cd30d752832877d1/__tests__/Uncontrolled.js#L27.
If you are using jest
and want a custom matcher, there is a project,
jest-axe
, that allows you to do so:
// from https://github.com/nickcolley/jest-axe#usage
const { axe, toHaveNoViolations } = require('jest-axe')
expect.extend(toHaveNoViolations)
it('should demonstrate this matcher`s usage', async () => {
const render = () => '<img src="#"/>'
// pass anything that outputs html to axe
const html = render()
expect(await axe(html)).toHaveNoViolations()
})
Thank you for reading!
Robert
In this post, we will write a functional programming-style implementation of
JavaScript’s map
function that not only works with Array
but any data structure that implements a map
method. Such data structures are
known as Functors
.
Some examples of Functors
are the algebraic data types1
Maybe
and
Async
(prior knowledge of them is
not required, and out of the two, we’ll only use Maybe
).
By the end of this post, you will:

* write a map function that includes functions for mapping Arrays, Objects, and Functors
* use map in a variety of scenarios
* write a compose function and use composition
* be introduced to the crocks library

This is a big post, so buckle up! If you want to see the final product, check out this CodeSandbox: https://codesandbox.io/s/bitter-grass-tknwb.
Note: if you’re not familiar with Array.prototype.map
already, check out my
video on Using JavaScript’s Array.prototype.map
Method or my post on JavaScript:
Understand Array.prototype.map by Reimplementing It.
We will use the implementation of the map
function in
crocks as our template, so if you want to skip this
article entirely, you can go and view its
source.
* map All the Things
* map Function
* map an Array
* map an Object
* map a Function
* map a Functor
* throwing Out Bad Data

map All the Things

Today we are going to write a map function that does the following:

* takes a function that accepts a value of type a and transforms it into a value of type b; i.e., (a -> b)
* takes some mappable datum (an Array, Object, Function, or Functor) and applies that function to its value(s)

Sounds easy, right? We’ll see!
map Function

There are some things we already know about our map function:

* its name is map (yay! nailed it!)
* it accepts a function (fn) and then some datum (m2)3

Let’s sketch it out:
// ???
}
Okay, it’s a start. This could conceivably be used like this:
map(x => x.id, [{ id: 1 }, { id: 2 }]) // [1, 2]
map(x => x.id, [{ id: 'a' }, { id: 'b' }]) // ['a', 'b']
Note the repetition of the x => x.id
. Let’s try pulling it out into a
variable:
const propId = x => x.id
map(propId, [{ id: 1 }, { id: 2 }]) // [1, 2]
map(propId, [{ id: 'a' }, { id: 'b' }]) // ['a', 'b']
Alas, that’s not much better – now we’re just repeating the variable!
Instead, what if we could store our combination of function and map
in a
variable and then use that to call with our different data? By partially
applying
the function to map
, we can!
const mapId = map.bind(null, x => x.id)
mapId([{ id: 1 }, { id: 2 }]) // [1, 2]
mapId([{ id: 'a' }, { id: 'b' }]) // ['a', 'b']
Nice! Now, let’s go back to our sketch. Let’s turn our binary function (which takes two parameters) to instead be a series of unary functions (which take one parameter4).
const map = fn => m => {
// ???
}
Wow, that was easy. By default, languages like
Haskell and
Elm automatically
curry all of
their function parameters. There are ways to automate that in
JavaScript, but
for today, we will manually curry functions by using arrow functions to
simulate it: const sum = a => b => a + b
, for example.
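Here is that idea in miniature: because sum takes its arguments one at a time, partial application falls out for free.

const sum = a => b => a + b
const add10 = sum(10) // partially applied; still waiting on `b`
add10(5)  // 15
add10(32) // 42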
Lastly, on the function definition side, it would be helpful for readers of our code to understand more about the intended types. In lieu of JavaScript not having a static type checker and me not knowing TypeScript yet, we’ll do this using a Haskell-style pseudo-type signature:
map :: Functor f => (a -> b) -> f a -> f b
And we can place that as a comment above our function:
// map :: Functor f => (a -> b) -> f a -> f b
const map = fn => m => {
// ???
}
Woah, woah, woah! What’s all this? Let’s break it down.

map :: Functor f => (a -> b) -> f a -> f b
--     |            |        |  |    |  |
--     1            2        3  4    5  6

* Everything after :: and before => in a signature is a class constraint. This says we’re going to use something in the type signature that obeys the Functor Laws5, identity and composition. The lowercase f represents what the Functor will be in the signature.
* The mapping function; e.g., x => x.id, like we did above.
* Arrows (->) are used in type signatures to say “then return…”. In our map signature, we say, “We accept a function from a to b, then return a function that accepts f of a, and then return f of b”. If we were summing three numbers, sum3 :: Number -> Number -> Number -> Number, this would read, “sum3 has the type of an expression that accepts a Number that returns a function that accepts a Number then returns a function that accepts a Number and then returns a Number.”
* f a says that a Functor, f, wraps some other type, a. A concrete example of this is [Number], which is a list (or Array) of Numbers.
* f b says that a Functor, f, wraps some other type, b. Why isn’t it a? This signifies that when we take in the Functor of any type a, it’s totally cool if you want to change the return type inside the Functor. For example, when we take [{ id: 'a' }, { id: 'b' }] and use map to turn that into ['a', 'b'], we’re taking [Object] (a list of Objects) and turning that into [String] (a list of Strings).

All together now! “map has the type of an expression where f is a Functor, and it accepts a function from a to b, then returns a function that accepts f of a, and then returns f of b.”
map an Array

Let’s map an Array!
Remember our Functor
class constraint?
map :: Functor f => (a -> b) -> f a -> f b
Guess what? Array is a Functor! How? It adheres to the laws of identity and composition:
// identity
[1,2,3].map(x => x) // [1,2,3]
// composition
const add10 = x => x + 10
const mult2 = x => x * 2
[1,2,3].map(add10).map(mult2) // [ 22, 24, 26 ]
// is equivalent to...
[1,2,3].map(x => mult2(add10(x))) // [ 22, 24, 26 ]
// another example of the composition law
const compose = (f, g) => x => f(g(x))
mult2(add10(2)) === compose(mult2, add10)(2) // true
// and applied back to our prior example
[1,2,3].map(add10).map(mult2) // [ 22, 24, 26 ]
[1,2,3].map(x => mult2(add10(x))) // [ 22, 24, 26 ]
[1,2,3].map(compose(mult2, add10)) // [ 22, 24, 26 ]
Through map
, Array
is a Functor
. A way to quickly determine if something
is a Functor
is to ask, “Does it implement map
/ is it map
pable?”
Since we know that Array
is map
pable, we can use our map
function to check
if the f a
parameter is an Array
and then use the built-in
Array.prototype.map
function to get from a
to b
:
// map :: Functor f => (a -> b) -> f a -> f b
const map = fn => m => {
if (isArray(m)) {
return mapArray(fn, m)
}
}
// isArray :: a -> Bool
const isArray = x => Array.isArray(x)
// mapArray :: ((a -> b), Array a) -> Array b
const mapArray = (fn, m) => m.map(x => fn(x))
Here, we use Array.isArray()
6 to see if the argument, m
, is an Array
,
then we call a function, mapArray
, that handles the map
ping of the Array
.
You might be thinking: why m.map(x => fn(x))
and not m.map(fn)
? As you might
remember from my article on re-implementing
Array.prototype.map
,
there are a few other arguments that the native implementation of map
provides,
as well as some potential changes to the this
keyword in your callback
function scope. Instead of allowing those to pass through, we simply take the
first argument, the currently iterated value, and send that to the callback
function7.
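A classic example of why forwarding only the first argument matters: parseInt accepts a radix as its second argument, so passing it directly to the native map lets the index leak in as the radix.

['1', '2', '3'].map(parseInt)             // [1, NaN, NaN] (the index becomes the radix)
['1', '2', '3'].map(x => parseInt(x, 10)) // [1, 2, 3]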
Now that we’ve seen the easy way to do map
with Array
, let’s see what this
would look like if we felt like implementing mapArray
ourselves:
// mapArray :: ((a -> b), Array a) -> Array b
const mapArray = (fn, m) => {
const newArray = []
for (let i = 0; i < m.length; i++) {
newArray[i] = fn(m[i])
}
return newArray
}
Not too shabby! All we do is create a new Array
and set the results of
calling the callback function with each item to its index in the new Array
and then return that Array
.
Do you think our map
function can handle an Array
of Array
s?
map(x => x * 2)([ [1,2], [3,4], [5,6] ])
// Array(3) [ NaN, NaN, NaN ]
While we can successfully iterate over the 3 items in the top-level Array
, our
callback function can’t perform operations like [1,2] * 2
! We need to do
another map
on the nested Array
s:
map(map(x => x * 2))([ [1,2], [3,4], [5,6] ])
// [ [2,4], [6,8], [10,12] ]
Well done! What else can you map
? We’re now going to leave charted waters and
venture into the unknown.
map an Object
Let’s say we have an i18n
(short for “internationalization”) object that we’ve
been given that has a terribly annoying issue: every translation is prefixed and
suffixed with an underscore (_
)!
const i18n = {
'en-US': {
dayMode: '_Day mode_',
greeting: '_Hello!_',
nightMode: '_Night Mode_'
},
'es-ES': {
dayMode: '_Modo día_',
greeting: '_¡Hola!_',
nightMode: '_Modo nocturno_'
}
}
We could manually delete each one, or we could find and replace with our text
editor, or we could write a for
loop to do this, but because we’re super
awesome functional programmers, we’ll try to map
over the Object
and write a
function that removes the prefixed & suffixed underscores (…then we copy and
paste that? work with me here!).
Before we can do this, we need to see what happens when we call .map()
on an
Object
:
i18n['en-US'].map(x => x.slice(1))
// TypeError: i18n['en-US'].map is not a function
Oh no! If we can’t even fix the en-US
Object
, how are we supposed to fix
all of them? Let’s update our map
function to handle Object
s.
// map :: Functor f => (a -> b) -> f a -> f b
const map = fn => m => {
if (isArray(m)) {
return mapArray(fn, m)
}
if (isObject(m)) {
return mapObject(fn, m)
}
}
// isObject :: a -> Bool
const isObject = x =>
!!x && Object.prototype.toString.call(x) === '[object Object]'
// mapObject :: ((a -> b), { k: a }) -> { k: b }
const mapObject = (fn, m) => {
const obj = {}
for (const [k, v] of Object.entries(m)) {
obj[k] = fn(v)
}
return obj
}
Here, we test if something is an object by using Object.prototype.toString
and make sure to .call(x)
instead of just .toString(x)
, for this reason:
Object.prototype.toString(null)
// "[object Object]"
Object.prototype.toString.call(null)
// "[object Null]"
Object.prototype.toString([])
// "[object Object]"
Object.prototype.toString.call([])
// "[object Array]"
Object.prototype.toString.call({})
// "[object Object]"
We then use our new mapObject
function, whose signature is
mapObject :: ((a -> b), { k: a }) -> { k: b }
mapObject
takes a function from a
to b
and an Object
with a key(s) and
some value(s), a
, and returns an Object
with a key(s) and some value(s) b
.
In short, it maps the values of an Object
. Our mapObject
function is
nothing more than a for
loop over each value returned from
Object.entries()
!
It calls the callback function with each value and returns a new object with the
same key and a new, updated value.
Let’s try it out:
const i18n = {
'en-US': {
dayMode: '_Day mode_',
greeting: '_Hello!_',
nightMode: '_Night Mode_'
},
'es-ES': {
dayMode: '_Modo día_',
greeting: '_¡Hola!_',
nightMode: '_Modo nocturno_'
}
}
map(x => x.slice(1, -1))(i18n['en-US'])
// {
// dayMode: 'Day mode',
// greeting: 'Hello!',
// nightMode: 'Night Mode'
// }
Okay – what about our entire i18n
object?
map(map(x => x.slice(1, -1)))(i18n)
// {
// 'en-US': {
// dayMode: 'Day mode',
// greeting: 'Hello!',
// nightMode: 'Night Mode'
// },
// 'es-ES': {
// dayMode: 'Modo día',
// greeting: '¡Hola!',
// nightMode: 'Modo nocturno'
// }
// }
Since we’re dealing with nested objects, we need to use map
on an Object
inside an Object
. We pass a nested map
ping function, and our little
underscore problem is gone!
map a Function
Remember our functions mult2
and add10
from before?
const add10 = x => x + 10
const mult2 = x => x * 2
What would happen if we used those as the arguments to our map
function and
wanted them to be automatically composed together so that we can then provide a
value later?
map(add10)(mult2) // undefined
map(add10)(mult2)(12) // TypeError: map(...)(...) is not a function
Time for our map
function to handle a Function
as the second argument and
compose
the two functions together:
// map :: Functor f => (a -> b) -> f a -> f b
const map = fn => m => {
if (isArray(m)) {
return mapArray(fn, m)
}
if (isObject(m)) {
return mapObject(fn, m)
}
if (isFunction(m)) {
return compose(fn, m)
}
}
// isFunction :: a -> Bool
const isFunction = x => typeof x === 'function'
// compose :: ((b -> c), (a -> b)) -> a -> c
const compose = (f, g) => x => f(g(x))
And when we run our previously failed code again,
map(add10)(mult2) // function compose(x)
map(add10)(mult2)(12) // 44
we can see that calling map
with two functions returns a composition of those
two functions, and calling that result with a primitive value (12
) gives us
back our result, 44
.
map a Functor
When we learned about map
ping Array
s before, we learned that Array
s are
Functor
s because they adhere to the laws of identity and composition;
i.e., they are map
pable.
There are all sorts of other data structures that implement a map
method, just
like Array.prototype
does, and we want to be able to handle those, too!
We currently have all the tools required to implement map
for Functor
s
without even knowing how they might work! All we need to know is, “Does it
implement map
as a Function
?” Let’s see what we can come up with!
// map :: Functor f => (a -> b) -> f a -> f b
const map = fn => m => {
if (isFunction(m)) {
return compose(fn, m)
}
if (isArray(m)) {
return mapArray(fn, m)
}
if (isFunctor(m)) {
return mapFunctor(fn, m)
}
if (isObject(m)) {
return mapObject(fn, m)
}
}
// isFunction :: a -> Bool
const isFunction = x => typeof x === 'function'
// isFunctor :: a -> Bool
const isFunctor = x => !!x && isFunction(x['map'])
// mapFunctor :: Functor f => ((a -> b), f a) -> f b
const mapFunctor = (fn, m) => m.map(fn)
That is surprisingly simple, isn’t it? We use our isFunction
check from before
to test if m
has a map
property that is a Function
, then we call map
on
m
and pass it the callback Function
in mapFunctor
.
You might be thinking that mapArray
and mapFunctor
could use the same handler
because Array
s are Functors
, and you are correct; however, because of the
extra implementation bits that come back from Array.prototype.map
, we’ll keep
them separate and only call the callback to Array.prototype.map
with the
currently iterated item. Here’s the difference:
// mapArray :: ((a -> b), Array a) -> Array b
const mapArray = (fn, m) => m.map(x => fn(x))
// mapFunctor :: Functor f => ((a -> b), f a) -> f b
const mapFunctor = (fn, m) => m.map(fn)
If you don’t care about this, it’s totally acceptable to not include the Array
bits at all and use the Functor
map
8 to handle the map
ping of Array
s,
since they’re Functor
s.
To test our Functor
map
ping, we’ll use crocks to
provide us access to an algebraic data type called
Maybe
.
import { compose, option, prop } from 'crocks'
const company = {
name: 'Pearce Software, LLC',
locations: [
'Charleston, SC, USA',
'Auckland, NZ',
'London, England, UK'
]
}
prop('foo', company) // Nothing
prop('locations', company) // Just [String]
option([], prop('foo', company))
// []
option([], prop('locations', company))
// [
// 'Charleston, SC, USA',
// 'Auckland, NZ',
// 'London, England, UK'
// ]
const getLocations = compose(option([]), prop('locations'))
getLocations(company)
// [
// 'Charleston, SC, USA',
// 'Auckland, NZ',
// 'London, England, UK'
// ]
Pump the brakes! What’s all this Just
and Nothing
stuff? We’re not going to
focus on Maybe
s today9, but the short version is that the locations
property
may or may not be present in the object, so we encapsulate that uncertainty
inside of a Maybe
algebraic data type via the prop
function, and we provide
a default value via the option
function that the Maybe
can fall back to in
the event of not being able to find locations
.
Why does this matter? We want to map
a Maybe
, and the prop
function will
give us access to one. Let’s see what it looks like:
import { compose, option, prop } from 'crocks'
const upcase = x => x.toUpperCase()
const getLocations =
compose(option([]), map(map(upcase)), prop('locations'))
getLocations({}) // []
getLocations(company)
// [
// 'CHARLESTON, SC, USA',
// 'AUCKLAND, NZ',
// 'LONDON, ENGLAND, UK'
// ]
Okay, cool! But why are we map
ping twice?
When we work with algebraic data types like Maybe
, instead of writing if (dataIsValid) doSomething
, the map
method on a Maybe
gives us access to
the value inside the Maybe
(our locations
), but it does so only if the data
is available.
Once we have access to the locations
, we then use map
again to uppercase
each location.
throwing Out Bad Data

What happens if the arguments passed to map aren’t a Function and a Functor?
?
map(null)([1,2,3]) // TypeError: fn is not a function
map(x => x * 2)(null) // undefined
map(null)(null) // undefined
I think we can provide some more helpful messaging to guide users of our map
tool on how to use it correctly.
// map :: Functor f => (a -> b) -> f a -> f b
const map = fn => m => {
if (!isFunction(fn)) {
throw new TypeError(`map: Please provide a Function for the first argument`)
}
// ...our other handlers...
throw new TypeError(`map: Please provide a Functor or Object for the second argument`)
}
map(null)([1,2,3]) // TypeError: map: Please provide a Function for the first argument
map(x => x * 2)(null) // TypeError: map: Please provide a Functor or Object for the second argument
map(null)(null) // TypeError: map: Please provide a Function for the first argument
Now, when we provide bad arguments, we’re told exactly what we need to do.
Congratulations and thank you for making it to the end! If you want to play around with what we created, check out this CodeSandbox: https://codesandbox.io/s/bitter-grass-tknwb.
Here is our code from today in its entirety:
const { compose, option, prop } = require('crocks')
// map :: Functor f => (a -> b) -> f a -> f b
const map = fn => m => {
if (!isFunction(fn)) {
throw new TypeError(`map: Please provide a Function for the first argument`)
}
if (isFunction(m)) {
return compose(fn, m)
}
if (isArray(m)) {
return mapArray(fn, m)
}
if (isFunctor(m)) {
return mapFunctor(fn, m)
}
if (isObject(m)) {
return mapObject(fn, m)
}
throw new TypeError(`map: Please provide a Functor or Object for the second argument`)
}
// we're opting for crocks' compose, instead
// compose :: ((b -> c), (a -> b)) -> a -> c
// const compose = (f, g) => x => f(g(x))
// isArray :: a -> Bool
const isArray = x => Array.isArray(x)
// isFunction :: a -> Bool
const isFunction = x => typeof x === 'function'
// isFunctor :: a -> Bool
const isFunctor = x => !!x && isFunction(x['map'])
// isObject :: a -> Bool
const isObject = x =>
!!x && Object.prototype.toString.call(x) === '[object Object]'
// mapArray :: ((a -> b), Array a) -> Array b
const mapArray = (fn, m) => {
const newArray = []
for (let i = 0; i < m.length; i++) {
newArray.push(fn(m[i]))
}
return newArray
}
// realistically, you should use this mapArray:
// const mapArray = (fn, m) => m.map(x => fn(x))
// mapObject :: ((a -> b), { k: a }) -> { k: b }
const mapObject = (fn, m) => {
const obj = {}
for (const [k, v] of Object.entries(m)) {
obj[k] = fn(v)
}
return obj
}
// mapFunctor :: Functor f => ((a -> b), f a) -> f b
const mapFunctor = (fn, m) => m.map(fn)
Thank you for reading!
Robert
https://github.com/hemanth/functional-programming-jargon#algebraic-data-type ↩︎
Wondering why the data comes last? Check out Brian Lonsdorf’s “Hey Underscore, You’re Doing It Wrong!” talk. The tl;dr is that you should arrange your arguments from least likely to change to most likely to change in order to pave the way for partial application and greater code reuse. ↩︎
https://github.com/hemanth/functional-programming-jargon#arity ↩︎
https://github.com/hemanth/functional-programming-jargon#functor ↩︎
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/isArray ↩︎
Check out ramda.js’ addIndex
function
to see a different pattern for working with indices and Array
s. ↩︎
If you’re an egghead.io subscriber, Andy Van Slaars has a great course, Safer JavaScript with the Maybe Type, or you can check out a Haskell article on The Functor class. ↩︎
In this post, we will reimplement JavaScript’s Array.prototype.map function in order to not only understand map better but also to get an idea of how to implement instance methods on Array.prototype.
If you’d prefer to see a ~5 minute recording of what we’ll do in this post, you can watch the video below; otherwise, carry on!
map to Convert Film Data to HTML Strings

First, we will start with some code that will demonstrate one way to take an array of films and output certain HTML strings.
Here is the films
array:
// films :: [Film]
const films = [
{ title: `Pulp Fiction`, score: 8.9 },
{ title: `Forrest Gump`, score: 8.8 },
{ title: `Interstellar`, score: 8.6 },
{ title: `The Prestige`, score: 8.5 }
]
and here is the output we are going for:
[
'<li class="film">#1 Pulp Fiction: <b>8.9</b></li>',
'<li class="film">#2 Forrest Gump: <b>8.8</b></li>',
'<li class="film">#3 Interstellar: <b>8.6</b></li>',
'<li class="film film--last">#4 The Prestige: <b>8.5</b></li>'
]
Let’s take a closer look at that output. We can see that the following data needs to be included for each item:

* position in the list (#3)
* title (Interstellar)
* score (8.6)
* CSS class of film, unless it is the last item, in which case it gets film and film--last
Here is the (somewhat unusual) implementation we will use today in order to
later test that we successfully reimplemented Array.prototype.map
:
// filmToHtml :: (Film, Index, Films) -> HtmlString
function filmToHtml(film, i, films) {
return this.format({
index: i + 1,
isLast: i === films.length - 1,
score: film.score,
title: film.title,
})
}
function format({ index, isLast, score, title }) {
const cn = isLast ? `film film--last` : `film`
return `<li class="${cn}">#${index} ${title}: <b>${score}</b></li>`
}
console.log(
films.map(filmToHtml, { format })
)
// [
// '<li class="film">#1 Pulp Fiction: <b>8.9</b></li>',
// '<li class="film">#2 Forrest Gump: <b>8.8</b></li>',
// '<li class="film">#3 Interstellar: <b>8.6</b></li>',
// '<li class="film film--last">#4 The Prestige: <b>8.5</b></li>'
// ]
This is probably two-to-three times more complicated than it needs to be, but it
is a sufficient example for today, for we make use of all of
Array.prototype.map
’s features.
Note: it’s rare to use the second argument to map
, but we are doing so today
in order to test our implementation.
So what is going on here?

The map method iterates over each film and calls filmToHtml with a few arguments:

1. the current film
2. the film’s index in the array
3. the original films array

It also calls the filmToHtml function with an optional this scope. To demonstrate how this works, we pass an object with the method format that filmToHtml then accesses via this.format. The format function then receives some data points and ultimately returns to us the <li>...</li> HTML for each film.
map Method, mappy
If we want to write a new method that can be called on our films
Array
instance, we add it to the Array.prototype
like this:
Array.prototype.mappy = function mappy(/* ??? */) {
// our implementation will go here
}
Since a method is a function defined on an object, we know we are working with a function, but what arguments does our function accept?
map’s Syntax?

As hinted at in a prior section, if we look at MDN’s Array.prototype.map syntax documentation, we can see that we need:

* a callback that gets called with an optional scope and 3 arguments:
  1. the currently iterated item
  2. the current item’s index in the array
  3. the array map is called upon
* an optional thisArg to use as this when calling the callback
method a callback
parameter, as well as an optional
thisArg
, which we’ll simply name _this
.
Array.prototype.mappy = function mappy(callback, _this) {
// Let's then have it return our array instance
// by returning the special `this` keyword.
return this
}
console.log(
films.mappy(filmToHtml, { format })
)
// [
// { title: `Pulp Fiction`, score: 8.9 },
// { title: `Forrest Gump`, score: 8.8 },
// { title: `Interstellar`, score: 8.6 },
// { title: `The Prestige`, score: 8.5 }
// ]
Since our mappy
method, like map
, will not alter the original array, we know
we’ll need to return a new array, so let’s do that and return the empty array:
Array.prototype.mappy = function mappy(callback, _this) {
const newArray = []
return newArray
}
console.log(
films.mappy(filmToHtml, { format })
)
// []
Now that we have a newArray
, know we can work with this
, have a callback
to call and a _this
scope to call the callback
with, we can populate the
newArray
with the result of calling the callback
function with each item in
our array (and with the appropriate arguments, of course):
Array.prototype.mappy = function mappy(callback, _this) {
const newArray = []
// We'll use a for loop to iterate over
// each item in our list,
for (let i = 0; i < this.length; i++) {
// and then at the end of our `newArray`
// we'll append the result of calling
// the callback function with the optional
// scope and its 3 arguments:
// 1. the item,
// 2. the current item's index in the array,
// 3. and lastly the original list, itself.
newArray.push(
callback.call(_this, this[i], i, this)
)
}
// Ultimately, we return the `newArray`
// containing our transformed items.
return newArray
}
// And when we log out the result,
// we can see our `filmToHtml` function
// works as expected.
console.log(
films.mappy(filmToHtml, { format })
)
// [
// '<li class="film">#1 Pulp Fiction: <b>8.9</b></li>',
// '<li class="film">#2 Forrest Gump: <b>8.8</b></li>',
// '<li class="film">#3 Interstellar: <b>8.6</b></li>',
// '<li class="film film--last">#4 The Prestige: <b>8.5</b></li>'
// ]
What happens if someone tries to use our mappy method but doesn't provide a callback function? For example:
films.mappy(123)
// TypeError: callback.call is not a function
films.map(123)
// TypeError: 123 is not a function
Unfortunately, our mappy method doesn't take this scenario into account! But the map method's error messaging isn't totally clear at a glance, either, so let's try a different approach:
Array.prototype.mappy = function mappy(callback, _this) {
if (typeof callback !== 'function') {
throw new TypeError(
'Array.prototype.mappy: ' +
'A callback function was expected ' +
'as the first argument, but we received ' +
'`' + JSON.stringify(callback) + '`'
)
}
const newArray = []
for (let i = 0; i < this.length; i++) {
newArray.push(
callback.call(_this, this[i], i, this)
)
}
return newArray
}
films.mappy(123)
// TypeError:
// Array.prototype.mappy: A callback function was
// expected as the first argument, but we received `123`
films.mappy({ foo: 'bar' })
// TypeError:
// Array.prototype.mappy: A callback function was
// expected as the first argument, but we received `{"foo":"bar"}`
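As one final check (a quick sketch reusing the films, filmToHtml and format definitions from above), our mappy results should line up exactly with the native map results:

const byMap = films.map(filmToHtml, { format })
const byMappy = films.mappy(filmToHtml, { format })
// Both arrays should contain the same HTML strings.
console.log(JSON.stringify(byMappy) === JSON.stringify(byMap)) // => true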
I hope this post has helped de-mystify how Array.prototype.map conceptually works under the hood! Next time, we'll look at how to implement map without polluting the Array.prototype, and we might even be able to use map on more data structures than just Array! Stay tuned.
Thank you for reading!
Robert
tl;dr => I've released v4 of react-medium-image-zoom, and you should consider using it for zooming images. Check out the Storybook Examples to see it in action.
react-medium-image-zoom
I wrote the first version of react-medium-image-zoom in 2016 in a 6m x 6m flat in London that my (now) wife and I lived in. At the time, I had been enamored with medium.com's image zooming and wanted to share that with the React.js masses, so I wrote the first implementation on nights and weekends, and once it was published, it was quickly added to projects at my day job.
Since then, react-medium-image-zoom has 22 All Contributors, has reached up to 50k downloads per month, is used by 638 open source projects on GitHub, has 49 dependent packages on NPM, and has over 708 stars on GitHub.
While that might not be staggering to anyone, that means the world to me – somebody else found value in something I made and put out into the world for free!
Over the past 3.5 years, a number of issues were opened to ask for bug fixes, features and general questions, and there have even been a few pull requests, too! I am so grateful for all the effort put in by others to help me help them solve their issues.
A point was eventually reached, however, where there were bugs that were unfixable with the implementation of the component, and the codebase was not something I wanted to work with any more.
I knew it could be simpler! I knew it could be more accessible!
react-medium-image-zoom v4
Here is what using react-medium-image-zoom looks like now.
First, you import the default, uncontrolled component and the static CSS file:
import Zoom from 'react-medium-image-zoom'
import 'react-medium-image-zoom/dist/styles.css'
And then you go about your day adding zooming capabilities to your images:
<Zoom>
<img
alt="that wanaka tree"
src="/path/to/thatwanakatree.jpg"
width="500"
/>
</Zoom>
Did I mention that you can now zoom anything you like?
// <picture>
<Zoom>
<picture>
<source
media="(max-width: 800px)"
srcSet="/path/to/teAraiPoint.jpg"
/>
<img
alt="that wanaka tree"
src="/path/to/thatwanakatree.jpg"
width="500"
/>
</picture>
</Zoom>
// <figure>
<figure>
<Zoom>
<img
alt="that wanaka tree"
src="/path/to/thatwanakatree.jpg"
width="500"
/>
</Zoom>
<figcaption>That Wanaka Tree</figcaption>
</figure>
// <div> that looks like a circle
<Zoom>
<div
aria-label="A blue circle"
style={{
width: 300,
height: 300,
borderRadius: '50%',
backgroundColor: '#0099ff'
}}
/>
</Zoom>
If you find that you want to use the library as a controlled component, you import the Controlled component like this:
import { Controlled as Zoom } from 'react-medium-image-zoom'
And then you dictate whether or not it should be zoomed and provide a callback for the library to give you hints about when you should probably zoom or unzoom based on events like clicks and scrolling:
<Zoom
isZoomed={true}
onZoomChange={isZoomed => { console.log({ isZoomed }) }}
>
<img
alt="that wanaka tree"
src="/path/to/thatwanakatree.jpg"
width="500"
/>
</Zoom>
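For instance, a minimal (hypothetical) wrapper component could hold that state with a React hook – the isZoomed and onZoomChange props are the same ones from the example above:

import React, { useState } from 'react'
import { Controlled as Zoom } from 'react-medium-image-zoom'

const ZoomableImage = () => {
  // We own the zoom state; the library tells us when it
  // thinks we should change it (clicks, scrolling, etc.).
  const [isZoomed, setIsZoomed] = useState(false)
  return (
    <Zoom isZoomed={isZoomed} onZoomChange={setIsZoomed}>
      <img
        alt="that wanaka tree"
        src="/path/to/thatwanakatree.jpg"
        width="500"
      />
    </Zoom>
  )
}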
What's Next for react-medium-image-zoom?
* naturalWidth and naturalHeight tracking, so we don't try to zoom anything when it's already at its maximum dimensions. This would also re-enable the ability to not zoom beyond a source image's natural dimensions once zoomed.
* performance improvements (requestAnimationFrame, etc.)

Thank you for reading this and for having an interest in react-medium-image-zoom! If you'd like to contribute to the project, need help, or have constructive feedback, please open an issue on the react-medium-image-zoom issue tracker.
Thank you for reading!
Robert
This is part 5 of a multipart series where we will look at getting a website / blog set up with hakyll and customized a fair bit.
Out of the box, hakyll takes filenames and dates and outputs nice routes for your webpages, but what if you want your routes to be based off of a metadata field like title? In this post we'll take a title like "Hakyll Pt. 5 – Generating Custom Post Filenames From a Title Slug" and have hakyll output routes like "hakyll-pt-5-generating-custom-post-filenames-from-a-title-slug".
In this post, we'll look at:
* the route Function
* idRoute, setExtension and Other Routes Functions for Clues
* using metadataRoute to Access Title Metadata

The route Function
In the hakyll tutorial on basic routing, as well as other posts in this series, we have come across hakyll's route function used in conjunction with functions like idRoute and setExtension.
Given these functions live in the Hakyll.Core.Routes module, we can bet that other functions for customizing our outputted routes will be found in there. Let's see what we can find!
idRoute, setExtension and Other Routes Functions for Clues
When we look at Hakyll.Core.Routes, we can see that idRoute and setExtension, which we know are used with route, both return a type of Routes. The implementation of Routes is not important for us here, for our job now is to see what other functions return Routes, as well, so that we can potentially leverage their functionality.
Doing a quick search in that module reveals to us some very interesting results!
* customRoute
* constRoute
* gsubRoute
* metadataRoute
Alright! Now, what does each one do?
* customRoute: takes in a function that accepts an Identifier and returns a FilePath, and builds Routes from it. Sounds like it could be useful, somehow… Let's keep going.
* constRoute: takes in a FilePath, wraps the value in a const function (which will always return the value it was passed) and then passes the function to customRoute! Okay, so this basically means if we say constRoute "foo.html", then that's what the route will come out as. Makes sense.
* gsubRoute: this one's purpose is to use patterns to replace parts of routes (like transforming "tags/rss/bar.xml" to "tags/bar.xml"). Useful! But not for our task.
* metadataRoute: takes in a function that accepts Metadata and returns Routes, and then this function returns Routes. Since we want to access our title metadata to create a route, something that gives us access to Metadata and returns Routes is exactly what we want!
Using metadataRoute to Access Title Metadata
As with most things in the Haskell world, let's allow the types to guide us.
What do we know?
* route accepts a function whose return value is Routes
* metadataRoute ultimately returns Routes (yay!), but it first takes in a function that accepts Metadata and needs to return Routes
Therefore, our task is to write a function with the signature Metadata -> Routes that finds the title field in the metadata, converts it to a URI slug, and transforms that FilePath into a Routes. Perhaps we could call it titleRoute and then extract the conversion from Metadata to FilePath to something like fileNameFromTitle? Good enough.
Also, what did we see earlier that can take a FilePath and return Routes? constRoute to the rescue! With these initial bits figured out, let's sketch this out:
main :: IO ()
main = hakyllWith config $ do
match "posts/*" $ do
let ctx = constField "type" "article" <> postCtx
route $ metadataRoute titleRoute -- THIS LINE
compile $ pandocCompilerCustom
>>= loadAndApplyTemplate "templates/post.html" ctx
>>= saveSnapshot "content"
>>= loadAndApplyTemplate "templates/default.html" ctx
-- ...other rules
titleRoute :: Metadata -> Routes
titleRoute = constRoute . fileNameFromTitle
fileNameFromTitle :: Metadata -> FilePath
fileNameFromTitle = undefined -- ???
Great! This is progress! We have the outline of what we need to accomplish. The next task is to find the title, convert it to a slug and return a FilePath. But first, we need to take a detour and write a toSlug function that we can work with.
Taking inspiration from the archived project https://github.com/mrkkrp/slug, we can write a module, Slug.hs, with a main function, toSlug, that takes in Text from Data.Text and transforms it from normal text to a slug. For example, "This example isn't good" would be transformed into "this-example-isnt-good".
{-# LANGUAGE OverloadedStrings #-}
module Slug (toSlug) where
import Data.Char (isAlphaNum)
import qualified Data.Text as T
keepAlphaNum :: Char -> Char
keepAlphaNum x
| isAlphaNum x = x
| otherwise = ' '
clean :: T.Text -> T.Text
clean =
T.map keepAlphaNum . T.replace "'" "" . T.replace "&" "and"
toSlug :: T.Text -> T.Text
toSlug =
T.intercalate (T.singleton '-') . T.words . T.toLower . clean
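As a quick sanity check (a sketch; this assumes the Slug module above loads cleanly in a REPL):

λ stack ghci
> import qualified Data.Text as T
> import Slug (toSlug)
> toSlug (T.pack "This example isn't good")
"this-example-isnt-good"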
Once you do this, don't forget to open up your project's .cabal file, add in this line, and eventually run stack build:
executable site
-- ...
other-modules: Slug
Now that this is taken care of, let’s return to the remaining task!
The last step in our journey is to look up the title in the Metadata, convert it to a slug and return a FilePath. Let's look at the implementation and then talk about it:
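-- Assumed imports in site.hs for the snippet below
-- (not shown in the original):
-- import Data.Maybe (fromMaybe)
-- import qualified Data.Text as T
-- import Slug (toSlug)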
titleRoute :: Metadata -> Routes
titleRoute =
constRoute . fileNameFromTitle
fileNameFromTitle :: Metadata -> FilePath
fileNameFromTitle =
T.unpack . (`T.append` ".html") . toSlug . T.pack . getTitleFromMeta
getTitleFromMeta :: Metadata -> String
getTitleFromMeta =
fromMaybe "no title" . lookupString "title"
* getTitleFromMeta: use Metadata's lookupString function to search for title and handle the Maybe String return value by providing a fallback of "no title"
* fileNameFromTitle: once we get the title String, convert it to type Text, pass that to the slugify function, append .html to the slugified title, then convert it back to a String (FilePath is a type alias of String, so no worries here)
* titleRoute: once we have a FilePath value, we pass it to constRoute to get back our Routes type that metadataRoute requires, and we're done!

While it would be awesome if this sort of thing were built in to hakyll, this experience has shown me that in a way, the core of hakyll allows people to customize their build to their heart's delight, and perhaps an implementation such as this would be useful as a hakyll plugin. Maybe!
Next up: Pt. 6 – Pure Builds With Nix
Thank you for reading!
Robert
You will inevitably need to copy static files over to your build folder at some point in a hakyll project, and this short tutorial will show you a simple way to do so.
As of the time of this writing, the default hakyll example for copying files looks like this:
match "images/*" $ do
route idRoute
compile copyFileCompiler
This is great and gets the job done! When I first looked at copying more files, I went down this path:
match "CNAME" $ do
route idRoute
compile copyFileCompiler
match "robots.txt" $ do
route idRoute
compile copyFileCompiler
match "images/*" $ do
route idRoute
compile copyFileCompiler
match "fonts/*" $ do
route idRoute
compile copyFileCompiler
-- ...and so on
Obviously, there is some code duplication here; there must be a better way!
Here are all the items I need copied over:
CNAME
robots.txt
_config.yml
images/*
fonts/*
.well-known/*
As it turns out, this list of file identifiers to copy can be used in conjunction with forM_ to take some foldable structure (for us, a list), map each element to a monadic action that uses hakyll's match function, ignore the results and ultimately simplify our code.
The type signature for forM_ is as follows:
forM_ :: (Foldable t, Monad m) => t a -> (a -> m b) -> m ()
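In our copy-files use case, that general signature specializes roughly like this (a sketch; Pattern and Rules () are hakyll types, and the string literals become Patterns via OverloadedStrings):

-- forM_ specialized to our list of file patterns:
-- forM_ :: [Pattern] -> (Pattern -> Rules ()) -> Rules ()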
And here is the implementation:
forM_ [ "CNAME"
, "robots.txt"
, "_config.yml"
, "images/*"
, "fonts/*"
, ".well-known/*"
] $ \f -> match f $ do
route idRoute
compile copyFileCompiler
Nice! While this technique is not mentioned in the documentation, it is present in the hakyll website's site.hs file, so we know we're in good company if jaspervdj is already using it.
If you want to read more about the possible patterns that can be matched, check out the commentary in the source here: https://github.com/jaspervdj/hakyll/blob/1abdeee743d65d96c6f469213ca6e7ea823340a7/lib/Hakyll/Core/Identifier/Pattern.hs.
In this /r/haskell reddit thread by GAumala, they point out that hakyll's pattern composition operators can also be used to accomplish the same goal. Here is how we could convert our forM_ above to instead use .||.:
match ("CNAME"
.||. "favicon.ico"
.||. "robots.txt"
.||. "_config.yml"
.||. "images/*"
.||. "fonts/*"
.||. ".well-known/*") $ do
route idRoute
compile copyFileCompiler
While I understand the forM_ approach better, this does seem to be more attractive!
If you’re using GitHub pages and have any dotfiles or dotfolders to copy over, make sure you pay attention here.
Let’s say you have signed up for Brave Payments and need to verify your site by placing a file at:
https://mysite.com/.well-known/brave-payments-verification.txt
Unfortunately, GitHub Pages, which uses jekyll under the hood, will ignore your dotfiles and dotfolders by default and will therefore not deploy them.
We can fix this by adding a _config.yml file to our project (you can see it included in the list in the previous section) and telling it to include what it is ignoring:
# _config.yml
include: [".well-known"]
Once you’ve done this, you can commit this file, push it up to GitHub and view it on your published site.
You can read more about jekyll’s configuration options here: https://jekyllrb.com/docs/configuration/options/.
Today we learned a simple way to list what files we want to be copied over in our hakyll projects, got exposed to forM_ and uncovered a potential issue with dotfiles and dotfolders not getting published on GitHub Pages.
Next up:
Thank you for reading!
Robert
There is already a great starter guide at https://jaspervdj.be/hakyll/tutorials/05-snapshots-feeds.html, so be sure to read this first – it might make it so you don't have to read this blog post at all.
Thankfully, hakyll already comes with prebuilt RSS and Atom templates! You can find the source here: https://github.com/jaspervdj/hakyll/tree/master/data/templates. While you won't need to copy and paste nor even directly use these files, you should look them over to see what fields they are expecting. There are two levels to be aware of: the feed itself and each individual feed item.
The feed itself is looking for the following, and you'll provide these through a FeedConfiguration that we'll discuss in a moment. Here are the fields the atom.xml and rss.xml templates are expecting:

* title (title of feed)
* description (description of feed)
* authorName (feed author name)
* authorEmail (feed author email)
* root (your website)
* updated (feed last updated at; should be done for you)
* body (feed body; should be done for you)
* url (path to the XML file; based off of a create ["rss.xml"] function that we'll discuss)

Each feed item, or entry, expects the following:

* title (title of the entry)
* root (your website)
* url (path to resource)
* published (published date; "%Y-%m-%dT%H:%M:%SZ" format; should be done for you via hakyll's dateField context)
* updated (updated date; "%Y-%m-%dT%H:%M:%SZ" format; should be done for you, unless you provide your own)
As is introduced in the required hakyll feed reading, we need to create a FeedConfiguration. If you'd like to see the FeedConfiguration data constructor, you can view it here: https://github.com/jaspervdj/hakyll/blob/f3a17454fae3b140ada30ebef13f508179f4cd0d/lib/Hakyll/Web/Feed.hs#L63-L75.
feedConfiguration :: FeedConfiguration
feedConfiguration =
FeedConfiguration
{ feedTitle = "My Blog"
, feedDescription = "Posts about x, y & z"
, feedAuthorName = "My Name"
, feedAuthorEmail = "me@myemail.com"
, feedRoot = "https://example.com"
}
We should next figure out what we want our "feed context" to consist of. What the official hakyll feed guide (linked above) uses is:
let feedCtx = postCtx `mappend` bodyField "description"
-- which can be abbreviated to
let feedCtx = postCtx <> bodyField "description"
This will enable you to include the body of your post as the description, but if you provide your own description field in your posts, then this step isn't necessary. For now, let's make our own feedCtx function that sticks to the original post.
feedCtx :: Context String
feedCtx = postCtx <> bodyField "description"
If you're unsure of what postCtx is, I recommend checking out the previous article or viewing the source of this site: https://github.com/rpearce/robertwpearce.com/blob/858163216f445eb8b6ab3b4304b022b64814b6f8/site.hs#L131-L136.
Here is what the official hakyll feed guide recommends:
create ["atom.xml"] $ do
route idRoute
compile $ do
let feedCtx = postCtx `mappend` bodyField "description"
posts <- fmap (take 10) . recentFirst =<<
loadAllSnapshots "posts/*" "content"
renderAtom myFeedConfiguration feedCtx posts
This is great! However, if we want to generate both an atom.xml feed and an rss.xml feed, we'll end up with almost duplicated code:
create ["rss.xml"] $ do
route idRoute
compile $ do
let feedCtx = postCtx `mappend` bodyField "description"
posts <- fmap (take 10) . recentFirst =<<
loadAllSnapshots "posts/*" "content"
renderRss myFeedConfiguration feedCtx posts
It looks like all the feed compilation is exactly the same except for the renderAtom and renderRss functions that come bundled with hakyll. With this in mind, let's write our own feed compiler and reduce as much boilerplate as we reasonably can.
To start out, let’s see what we want our top-level end result to be:
create ["atom.xml"] $ do
route idRoute
compile (feedCompiler renderAtom)
create ["rss.xml"] $ do
route idRoute
compile (feedCompiler renderRss)
While we could potentially abstract this further, this leaves wiggle room for customizing the route for whatever reason you may want to.
This feedCompiler is a function that we need to write that will house the missing logic. Let's look at its type:
feedCompiler
  :: (FeedConfiguration -> Context String -> [Item String] -> Compiler (Item String))
  -> Compiler (Item String)
The parenthesized type describes both renderAtom and renderRss (they're the same). For reading's sake, let's set that to a type alias called FeedRenderer:
type FeedRenderer =
FeedConfiguration
-> Context String
-> [Item String]
-> Compiler (Item String)
And now we can define our feed but do it in a slightly cleaner way:
feedCompiler :: FeedRenderer -> Compiler (Item String)
feedCompiler renderer =
renderer feedConfiguration feedCtx
=<< fmap (take 10) . recentFirst
=<< loadAllSnapshots "posts/*" "content"
Thanks to Abhinav Sarkar on lobste.rs, I was pointed to a pull request, https://github.com/jaspervdj/hakyll/pull/652, that allows hakyll users to use their own feed templates. Here is some example usage from the PR:
customRenderAtom :: FeedConfiguration -> Context String -> [Item String] -> Compiler (Item String)
customRenderAtom config context items = do
atomTemplate <- unsafeCompiler $ readFile "templates/atom.xml"
atomItemTemplate <- unsafeCompiler $ readFile "templates/atom-item.xml"
renderAtomWithTemplates atomTemplate atomItemTemplate config context items
If you've made it this far and have successfully generated and published your atom.xml and/or rss.xml files, see if they're valid! Head to https://validator.w3.org/feed/ and see if yours validate.
You can check out your new feed in an RSS/Atom feed reader such as the browser plugin FeedBro or any others.
The updated Field
I ran into a feed validation problem where, in a few posts, I manually set the updated field to a date – not datetime – and thus invalidated my feed. The value 2017-06-30 needed to be in the "%Y-%m-%dT%H:%M:%SZ" format, or 2017-06-30T00:00:00Z. This led me down a rabbit hole that ended in me essentially repurposing the dateField code from hakyll (https://github.com/jaspervdj/hakyll/blob/c85198d8cb6ce055c788e287c7f2470eac0aad36/lib/Hakyll/Web/Template/Context.hs#L273-L321).
While I tried to use parseTimeM and formatTime from Data.Time.Format in my own way, I couldn't make it as simple as I wanted, thus leading to me giving up and using what was already there. Here's what I did:
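-- Assumed imports for the snippets below (not shown in the original):
-- import Control.Monad (msum)
-- import Data.Time.Format (TimeLocale, defaultTimeLocale, formatTime, parseTimeM)
-- import qualified Data.Time.Clock as Clock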
feedCtx :: Context String
feedCtx =
updatedField <> -- THIS IS NEW
postCtx <>
bodyField "description"
updatedField :: Context String
updatedField = field "updated" $ \i -> do
let locale = defaultTimeLocale
time <- getUpdatedUTC locale $ itemIdentifier i
return $ formatTime locale "%Y-%m-%dT%H:%M:%SZ" time
getUpdatedUTC :: MonadMetadata m => TimeLocale -> Identifier -> m Clock.UTCTime
getUpdatedUTC locale id' = do
metadata <- getMetadata id'
let tryField k fmt = lookupString k metadata >>= parseTime' fmt
maybe empty' return $ msum [tryField "updated" fmt | fmt <- formats]
where
empty' = fail $ "Hakyll.Web.Template.Context.getUpdatedUTC: " ++ "could not parse time for " ++ show id'
parseTime' = parseTimeM True locale
formats =
[ "%a, %d %b %Y %H:%M:%S %Z"
, "%Y-%m-%dT%H:%M:%S%Z"
, "%Y-%m-%d %H:%M:%S%Z"
, "%Y-%m-%d"
, "%B %e, %Y %l:%M %p"
, "%B %e, %Y"
, "%b %d, %Y"
]
Woah! We need to break down what’s happening here.
feedCtx
The addition to feedCtx is before our postCtx because of the mappend precedence of what comes out of the pipeline with the value updated. We want first rights to transforming the updated field, so it needs to come first.
updatedField
This function is a Context that leans on hakyll's field function to say that we want to work with the updated field and then do some Monad stuff with time. The tl;dr is that we send the field's current value off in order to get a UTCTime value back, and then we format it to be the way we need it.
getUpdatedUTC
It's really not as bad as it looks! The root of this function does two things:
1. looks up the updated value in the metadata
1. tries to parse that value against each of the accepted date formats

If it can't do these things, it simply fails.
Yes, I could have simply written my updated field in the correct format. But where's the fun in that? I would hate for my feed to silently invalidate itself over something so simple!
Whew! We dove into generating Atom & RSS XML feeds with hakyll, uncovered a nice refactor opportunity via feedCompiler, learned how to validate our feeds and ultimately learned about how a seemingly harmless updated date could prevent us from having a totally valid feed!
Next up:
Thank you for reading!
Robert
A sitemap.xml template, just like the templates in the last post, receives context fields to work with (variables, essentially), and outputs the result of applying said context to the template. Here is what our sitemap template will look like today in our project's templates/sitemap.xml:
<?xml version="1.0" encoding="UTF-8"?>
<urlset
xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
xmlns:news="http://www.google.com/schemas/sitemap-news/0.9"
xmlns:xhtml="http://www.w3.org/1999/xhtml"
xmlns:mobile="http://www.google.com/schemas/sitemap-mobile/1.0"
xmlns:image="http://www.google.com/schemas/sitemap-image/1.1"
xmlns:video="http://www.google.com/schemas/sitemap-video/1.1"
>
<url>
<loc>$root$</loc>
<changefreq>daily</changefreq>
<priority>1.0</priority>
</url>
$for(pages)$
<url>
<loc>$root$$url$</loc>
<lastmod>$if(updated)$$updated$$else$$if(date)$$date$$endif$$endif$</lastmod>
<changefreq>weekly</changefreq>
<priority>0.8</priority>
</url>
$endfor$
</urlset>
Apart from the normal sitemap boilerplate, you can see root, pages, url, date and updated context fields. While date and updated would come from your metadata fields defined for a post, and the url is built from hakyll's defaultContext, the root and pages fields are custom defined in what will be our very own sitemapCtx context. In the next section, we'll use this template to generate our sitemap.xml file.
If you create a hakyll project from scratch, you will start out with a few files that we can add to our sitemap:
* index.html
* about.rst
* contact.markdown
* posts/2015-08-12-spqr.html
* posts/2015-10-07-rosa-rosa-rosam.html
* posts/2015-11-28-carpe-diem.html
* posts/2015-12-07-tu-quoque.html
You should note that your site.hs file also has the following:
main :: IO ()
main = hakyllWith config $ do
-- ...
match (fromList ["about.rst", "contact.markdown"]) $ do
route $ setExtension "html"
compile $ pandocCompiler
>>= loadAndApplyTemplate "templates/default.html" defaultContext
match "posts/*" $ do
route $ setExtension "html"
compile $ pandocCompiler
>>= loadAndApplyTemplate "templates/post.html" postCtx
>>= loadAndApplyTemplate "templates/default.html" postCtx
It's important that you understand that any files you want to be loaded and sent to templates/sitemap.xml must first be matched and compiled before the sitemap can be built. If you don't do this, you'll pull your hair out wondering why the file (or folder) you're trying to include in the sitemap never shows up.
Now, there is something that we are going to emulate to make this sitemap a reality (this should already be in site.hs):
main :: IO ()
main = hakyllWith config $ do
-- ...
create ["archive.html"] $ do
route idRoute
compile $ do
posts <- recentFirst =<< loadAll "posts/*"
let archiveCtx =
listField "posts" postCtx (return posts) `mappend`
constField "title" "Archives" `mappend`
defaultContext
makeItem ""
>>= loadAndApplyTemplate "templates/archive.html" archiveCtx
>>= loadAndApplyTemplate "templates/default.html" archiveCtx
Reading the code above, this essentially says
1. here's a file we want to create that does not yet exist (how create differs from match)
1. when you create the route, keep the filename (what idRoute does)
1. when you compile, load all the posts, specify what the context to send to each template will be, then make the item (the "" is an identifier… see the source for more), then pass the context to the archive template and pass that on to the default template, ultimately building up a full webpage from the inside-out
Let's change this 3-step rule to suit our needs before we wrangle the code. We want our rules to say:
1. here's a file we want to create that does not yet exist (sitemap.xml)
1. when you create the route, keep the filename (what idRoute does)
1. when you compile, load all the posts, load all the other pages, specify what the context to send to each template will be, then make the item, then pass the context to the sitemap template, ultimately building up an XML file
This is almost the same! Let’s write it:
main :: IO ()
main = hakyllWith config $ do
-- ...
create ["sitemap.xml"] $ do
route idRoute
compile $ do
-- load and sort the posts
posts <- recentFirst =<< loadAll "posts/*"
-- load individual pages from a list (globs DO NOT work here)
singlePages <- loadAll (fromList ["about.rst", "contact.markdown"])
-- mappend the posts and singlePages together
let pages = posts <> singlePages
-- create the `pages` field with the postCtx
-- and return the `pages` value for it
sitemapCtx = listField "pages" postCtx (return pages)
-- make the item and apply our sitemap template
makeItem ""
>>= loadAndApplyTemplate "templates/sitemap.xml" sitemapCtx
This is starting to look good! But what's wrong here? Remember the root context bits? We're going to need to define what that is, and the best way that I've found right now is simply as a String; if you want to do something fancy with configuration or reading it in dynamically, then go nuts.
root :: String
root = "https://ourblog.com"
With that defined, we can add it to our contexts:
main :: IO ()
main = hakyllWith config $ do
-- ...
create ["sitemap.xml"] $ do
route idRoute
compile $ do
posts <- recentFirst =<< loadAll "posts/*"
singlePages <- loadAll (fromList ["about.rst", "contact.markdown"])
let pages = posts <> singlePages
sitemapCtx =
constField "root" root <> -- here
listField "pages" postCtx (return pages)
makeItem ""
>>= loadAndApplyTemplate "templates/sitemap.xml" sitemapCtx
-- ...
postCtx :: Context String
postCtx =
constField "root" root <> -- here
dateField "date" "%Y-%m-%d" <>
defaultContext
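Put together, each post entry in the generated sitemap should come out looking something like this (the values here are purely illustrative):

<url>
  <loc>https://ourblog.com/posts/2015-12-07-tu-quoque.html</loc>
  <lastmod>2015-12-07</lastmod>
  <changefreq>weekly</changefreq>
  <priority>0.8</priority>
</url>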
Hint: if the <> is throwing you for a loop, it's defined as the same thing as mappend.
See how we defined constField "root" root in two places? We're talking about two different contexts here: the sitemap context and the post context. While you could have the postCtx be combined with the sitemapCtx, thus giving the pages field access to the root field, you probably want to use root (and perhaps other constants) wherever you work with posts, so adding them to postCtx for use everywhere seems like the right thing to do.
Once you've got all this, run the following to build (or rebuild) your docs/sitemap.xml file:
1. λ stack build
1. λ stack exec site clean
1. λ stack exec site build
Your docs/sitemap.xml should now have all your pages defined in it!
We've done some epic traveling in New Zealand and now want to include a bunch of pages we've written in the sitemap. Those pages are:
* new-zealand/index.md
* new-zealand/otago/index.md
* new-zealand/otago/dunedin-area.md
* new-zealand/otago/queenstown-area.md
* new-zealand/otago/wanaka-area.md
First, we make sure that our pages get compiled (we'll use postCtx for them):
main :: IO ()
main = hakyllWith config $ do
-- ...
match "new-zealand/**" $ do
route $ setExtension "html"
compile $ pandocCompiler
>>= loadAndApplyTemplate "templates/post.html" postCtx
>>= loadAndApplyTemplate "templates/default.html" postCtx
And then we want to make sure we add them to our create function:
main :: IO ()
main = hakyllWith config $ do
-- ... match code up here
create ["sitemap.xml"] $ do
route idRoute
compile $ do
posts <- recentFirst =<< loadAll "posts/*"
singlePages <- loadAll (fromList ["about.rst", "contact.markdown"])
nzPages <- loadAll "new-zealand/**" -- here
let pages = posts <> singlePages <> nzPages -- here
sitemapCtx =
constField "root" root <>
listField "pages" postCtx (return pages)
makeItem ""
>>= loadAndApplyTemplate "templates/sitemap.xml" sitemapCtx
I could not figure out how to mix globs (new-zealand/**) in with individual file paths (included in fromList), so I had to load them separately; if you figure out how, let me know!
Once you've got all this, run the following to rebuild your docs/sitemap.xml file:
1. λ stack build
1. λ stack exec site rebuild
In this lesson we learned how to dynamically generate a sitemap.xml file using hakyll. Next time, we’ll use these same skills to generate our own RSS and Atom XML feeds.
Next up:
* Pt. 3 – Generating RSS and Atom XML Feeds
* Pt. 4 – Copying Static Files For Your Build
* Pt. 5 – Generating Custom Post Filenames From a Title Slug
* (wip) Pt. 6 – Customizing Markdown Compiler Options
Thank you for reading!
Robert
While this is detailed fully on the hakyll installation tutorial, I will repeat it here.
* make sure $HOME/.local/bin is included in your PATH
* λ stack install hakyll – should install hakyll-init in $HOME/.local/bin
* λ hakyll-init ourblog.com
* λ cd ourblog.com
* λ stack init
* λ stack build
* λ stack exec site build
* λ stack exec site rebuild – to test the rebuild command
* λ stack exec site watch – starts dev server & watches for changes

Hakyll gives you the ability to override its existing configuration rules to change anything from the output directory (default _site/) to deploy commands to the host and port for previewing your site locally.
Here is what the default configuration looks like in hakyll (source):
-- | Default configuration for a hakyll application
defaultConfiguration :: Configuration
defaultConfiguration = Configuration
{ destinationDirectory = "_site"
, storeDirectory = "_cache"
, tmpDirectory = "_cache/tmp"
, providerDirectory = "."
, ignoreFile = ignoreFile'
, deployCommand = "echo 'No deploy command specified' && exit 1"
, deploySite = system . deployCommand
, inMemoryCache = True
, previewHost = "127.0.0.1"
, previewPort = 8000
}
where
ignoreFile' path
| "." `isPrefixOf` fileName = True
| "#" `isPrefixOf` fileName = True
| "~" `isSuffixOf` fileName = True
| ".swp" `isSuffixOf` fileName = True
| otherwise = False
where
fileName = takeFileName path
The hakyll tutorial on rules, routes and compilers makes reference to a hakyllWith function for customizing configuration, so let's see how we can use that.
The default hakyll main function in your site.hs file looks like this:
main :: IO ()
main = hakyll $ do
What we can do is change hakyll to hakyllWith and pass it a value that we'll name config, which makes use of the defaultConfiguration but returns a new, altered record:
main :: IO ()
main = hakyllWith config $ do
-- ...
config :: Configuration
config = defaultConfiguration
{ destinationDirectory = "docs"
, previewPort = 5000
}
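Since deployCommand and deploySite are also part of the record above, you could override the deploy behavior the same way (a sketch; the rsync destination is hypothetical) and then run it with λ stack exec site deploy:

config :: Configuration
config = defaultConfiguration
  { destinationDirectory = "docs"
  , previewPort = 5000
    -- hypothetical deploy target; replace with your own
  , deployCommand = "rsync -avz --delete docs/ user@example.com:/var/www/ourblog"
  }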
Whenever we make a change to site.hs, we need to make sure we use stack to build it again and restart our server. We'll also need to make sure we clean out our old output folder with the clean command. So, all together now:
λ stack exec site clean
λ stack build
λ stack exec site watch
…and now your output will be in the docs/ folder, and your site will be previewable at http://localhost:5000.
Now that we've flexed our configuration muscles a bit, let's look at the posts/ folder to see what we're working with on the blog side.
If you open the posts/ folder and select any preset blog post (hint: you can see them online at https://github.com/jaspervdj/hakyll/tree/master/data/example/posts; make sure you click the "Raw" button to view the raw markdown), you'll see a standard markdown file containing two sets of content:
* metadata (between the --- delimiters)
* body content (everything else)
From http://localhost:5000, let's click on the first post we see: http://localhost:5000/posts/2015-12-07-tu-quoque.html. If we open up the corresponding source file, posts/2015-12-07-tu-quoque.markdown, in our text editor, we can see there are two metadata fields: title and author. Let's change them:
---
title: Some Latin Text
author: Some Roman Person
---
Refresh the page and see the changes!
But note that despite changing the title of your blog post, the outputted HTML file is still located at http://localhost:5000/posts/2015-12-07-tu-quoque.html. This is because the markdown filename is what currently determines the outputted filename. We will change this in Part 5 of this series, but until then, if you change the title of your post, it would be a good idea to also change the filename.
Feel free to edit these metadata fields and markdown content with your own blog post material.
Next up, we’ll see about how we can customize the templates to work with all the metadata that we might want to include from our posts (description, author, keywords, image, etc).
There is a hakyll tutorial on templates, context and control flow that you should check out. Here, we're going to adjust the default templates to suit our needs.
The HTML templates can be found in – you guessed it – the templates/ folder. The first file we will look at is templates/default.html (hint: this template is also viewable online at https://github.com/jaspervdj/hakyll/blob/master/data/example/templates/default.html).
Templates are nothing more than .html files but with a caveat (which you'd know about if you read the tutorial above): there is added context – drawn from markdown options or injected before compilation in site.hs – that can be used anywhere, so long as it is between $ (dollar signs). Here is an example that uses the title property that is set in each file:
<title>$title$</title>
Cool! Now what if we wanted to use our author metadata?
<meta name="author" content="$author$">
Oh no!
Compiling
updated templates/default.html
[ERROR] Missing field $author$ in context for item about.rst
This is because not all of our files being run through this default template have all the same fields. We can use conditionals to solve this:
$if(author)$<meta name="author" content="$author$">$endif$
<!-- or, if you prefer -->
$if(author)$
<meta name="author" content="$author$">
$endif$
Blog posts also should have a description and keywords, so let's add those to posts/2015-12-07-tu-quoque.markdown:
---
title: My Blog Post
description: This is my great blog post
keywords: blog, first blog, best blog evar
author: I did it!
---
We’ll then update our default template to handle those, as well:
<title>$title$</title>
$if(author)$<meta name="author" content="$author$">$endif$
$if(description)$<meta name="description" content="$description$">$endif$
$if(keywords)$<meta name="keywords" content="$keywords$">$endif$
If you refresh http://localhost:5000/posts/2015-12-07-tu-quoque.html and open up the web inspector, you'll see that the <head> now contains not only your post's title but also all the other fields you specified!
There are many other possibilities for this, as well. For instance, if you wanted to have different og:types of pages, you could do:
$if(type)$
<meta property="og:type" content="$type$">
$else$
<meta property="og:type" content="website">
$endif$
Check out the default template for this website here: https://github.com/rpearce/robertwpearce.com/blob/main/src/templates/default.html.
Lastly for today, what if we want to reuse templates and specify where they should be rendered from other templates? Enter hakyll partials.
A common use of partials is for navigation across different templates. We can add a new file, templates/nav.html, and place the following in it (add some CSS classes and styling if you want it to look nice):
<nav class="nav">
<a href="/">Home</a>
<a href="mailto:me@myemail.com">Email Me</a>
</nav>
Now, this partial can be used anywhere. For example, from templates/post.html:
$partial("templates/nav.html")$
In this lesson we learned how to get started with hakyll and some of the ways we can customize it to our own needs. Next time, we'll dive into site.hs to generate our own sitemap.xml file.
Next up:
Until next time,
Robert
The map, filter and reduce methods on Array.prototype are essential to adopting a functional programming style in JavaScript, and in this post we're going to examine how to use these three concepts with ramda.js.
If you are unfamiliar with these three concepts, then be sure to first read the MDN documentation on each (linked above).
Pre-requisite ramda posts:
Other ramda posts:
This is the test data set we will reference throughout the post:
const films = [
{ title: 'The Empire Strikes Back', rating: 8.8 },
{ title: 'Pulp Fiction', rating: 8.9 },
{ title: 'The Deer Hunter', rating: 8.2 },
{ title: 'The Lion King', rating: 8.5 }
]
There are a few conditions that are required for us to meet our goal. We must construct a function that:
* only selects those with an 8.8 rating or higher
* returns a list of the selected titles interpolated in an HTML string that has this structure:

<div>TITLE: <strong>SCORE</strong></div>
Given these requirements, a pseudotype signature for this might be:
// `output` takes in a list of films
// and returns a list of HTML strings
//
// output :: [Film] -> [Html]
films.map(film => `<div>${film.title}, <strong>${film.rating}</strong></div>`)
// => [
// "<div>The Empire Strikes Back, <strong>8.8</strong></div>",
// "<div>Pulp Fiction, <strong>8.9</strong></div>",
// "<div>The Deer Hunter, <strong>8.2</strong></div>",
// "<div>The Lion King, <strong>8.5</strong></div>"
// ]
Try this code in the ramda REPL
Extracting the map Callback
// filmHtml :: Film -> Html
const filmHtml = film =>
`<div>${film.title}, <strong>${film.rating}</strong></div>`
films.map(filmHtml)
Try this code in the ramda REPL
filter Out Lower Scores
films
.filter(x => x.rating >= 8.8)
.map(filmHtml)
// => [
// "<div>The Empire Strikes Back, <strong>8.8</strong></div>",
// "<div>Pulp Fiction, <strong>8.9</strong></div>",
// ]
Try this code in the ramda REPL
But wait! We can extract that filter callback, as well:
// hasHighScore :: Film -> Bool
const hasHighScore = x =>
x.rating >= 8.8
films
.filter(hasHighScore)
.map(filmHtml)
Try this code in the ramda REPL
Composing filter and map
We can use ramda's function currying capabilities and function composition to create some very clear and concise pointfree functions.
import { compose, filter, map } from 'ramda'
// output :: [Film] -> [Html]
const output =
compose(map(filmHtml), filter(hasHighScore))
output(films)
Try this code in the ramda REPL
One thing to remember with ramda functions (like map and filter) is that ramda typically orders arguments from least likely to change to most likely to change. Callback/transformation functions here are passed as the first argument, and the data comes last. To understand this further, check out the following links:
If we want to not only reuse our filtering and mapping functions but also make them more readable, we can pull out the pieces that make up our output function into smaller bits:
// filmsToHtml :: [Film] -> [Html]
const filmsToHtml =
map(filmHtml)
// highScores :: [Film] -> [Film]
const highScores =
filter(hasHighScore)
// output :: [Film] -> [Html]
const output =
compose(filmsToHtml, highScores)
output(films)
Try this code in the ramda REPL
reduce
We can accomplish the same goals as filter and map by making use of reduce.
films.reduce((acc, x) => {
return hasHighScore(x)
? acc.concat(filmHtml(x))
: acc
}, [])
// or, for better performance
films.reduce((acc, x) => {
if (hasHighScore(x)) {
acc.push(filmHtml(x))
}
return acc
}, [])
Try this code in the ramda REPL
If you’re not familiar with reduce, be sure to play with the live example to better understand how those pieces work before moving on.
It's also worth noting that you can do just about anything in JavaScript with the reduce function. I highly recommend going through Kyle Hill's slides on reduce Is The Omnifunction.
But wait! We can extract the reduce callback like we did with map and filter before:
// highScoresHtml :: ([Html], Film) -> [Html]
const highScoresHtml = (acc, x) =>
hasHighScore(x)
? acc.concat(filmHtml(x))
: acc
films.reduce(highScoresHtml, [])
Try this code in the ramda REPL
import { reduce } from 'ramda'
const output =
reduce(highScoresHtml, [])
output(films)
Try this code in the ramda REPL
As before with map & filter, output can be reused over and over again and passed any set of films to generate HTML for. To further understand the parameter order used here, check out the docs for ramda's reduce.
This step-by-step process we’ve walked through is as close to real-life refactoring/rethinking as I could do in a post. Thanks for making it this far.
Until next time,
Robert
One of the most prevalent causes of bugs I've seen in latter-day JavaScript revolves around expectations with regard to data modeling. With the rise of react, redux, et al, many of us store our application state in an object whose keys and hierarchy can easily change, leaving us sometimes with or without values that were in fact expected: for example, undefined is not a function or trying to call .map(...) on a non-mappable data type (such as null or undefined). While there are any number of solutions for this issue that might even include diving into algebraic data types, the ramda library gives us a few helper methods that we can use right away to dig into our data structures and extract values:
Other ramda posts:
prop & propOr
What happens normally if you expect an array, try to access the third item (index position 2), but are actually provided undefined instead of an array?
const arr = undefined
arr[2] // TypeError is thrown
What happens if you try to access the length property on what you think should be an array but ends up being null or undefined?
const arr = null
arr.length // TypeError is thrown
One solution is to do the “value or default” approach to keep the errors at bay:
const arr = undefined
const xs = arr || []
xs[2] // undefined
xs.length // 0
An approach we could take to avoid the errors being thrown would be to use ramda's prop helper:
import prop from 'ramda/src/prop'
const arr = undefined
prop(2, arr) // undefined
prop('length', arr) // undefined
Ramda's length function would accomplish a similar goal for prop('length').
But if we want a default to be returned in lieu of our data not being present, we can turn to propOr:
import propOr from 'ramda/src/propOr'
const arr = undefined
propOr({}, 2, arr) // {}
propOr(0, 'length', arr) // 0
If you need to select multiple properties without fear, then the props or pick functions may be for you:
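For example (a sketch, following the same import style as above):

import props from 'ramda/src/props'
import pick from 'ramda/src/pick'

const course = { title: 'How To Build a Tiny House', dueAt: '2018-01-30' }

// props returns an array of values for the given keys
props(['title', 'dueAt'], course) // => ['How To Build a Tiny House', '2018-01-30']

// pick returns an object containing only the given keys
pick(['title'], course) // => { title: 'How To Build a Tiny House' }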
path & pathOr
What if we are working in a deeply nested data structure where multiple keys in our hierarchy may or may not exist? Enter path and pathOr. These work similarly to prop and propOr except that they use an array syntax to dive into data structures and ultimately check for a value, whereas the prop family checks for a property's presence.
import path from 'ramda/src/path'
const data = {
courses: {
abc123: {
title: 'How To Build a Tiny House',
dueAt: '2018-01-30'
}
}
}
// getCourseTitle :: String -> String | undefined
const getCourseTitle = courseId =>
path(['courses', courseId, 'title'])
getCourseTitle('abc123')(data) // "How To Build a Tiny House"
getCourseTitle('def456')(data) // undefined
Try this code in the ramda REPL
Or if we’d always like to default to a value, we can use pathOr
:
import pathOr from 'ramda/src/pathOr'
const data = {
courses: {
abc123: {
title: 'How To Build a Tiny House',
dueAt: '2018-01-30'
}
}
}
// getCourseTitle :: String -> String
const getCourseTitle = courseId =>
pathOr('My Course', ['courses', courseId, 'title'])
getCourseTitle('abc123')(data) // "How To Build a Tiny House"
getCourseTitle('def456')(data) // "My Course"
Try this code in the ramda REPL
As I said before, there are many different ways to solve this problem, but I've found the propOr and pathOr family of ramda functions to be a great starting point.
Until next time,
Robert
Composition is defined as “the combining of distinct parts or elements to form a whole.” source If we apply this thinking to functions in programming, then function composition can be seen as the combining of functions to form a new function that is composed of said functions. Now that that word salad is over, let’s get to work.
We have a task, and our task is to write a function that
1. accepts a list of objects containing score (Number) and name (String) properties
1. returns the top 3 scorers' names from highest to lowest
Here are the unordered results that we have to work with:
const results = [
{ score: 40, name: 'Aragorn' },
{ score: 99, name: 'Bilbo' },
{ score: 63, name: 'Celeborn' },
{ score: 77, name: 'Denethor' },
{ score: 100, name: 'Eowin' },
{ score: 94, name: 'Frodo' }
]
Other ramda posts:
// getHighScorers :: [Object] -> [String]
const getHighScorers = xs =>
[...xs]
.sort((a, b) => b.score - a.score)
.slice(0, 3)
.map(x => x.name)
getHighScorers(results) // => [ 'Eowin', 'Bilbo', 'Frodo' ]
As cautious JavaScript developers, we know to reach for our functions and methods that don't mutate the objects we're receiving. We use a copy of the original list and chain together operations that sort, slice and map the return values of each operation until we arrive at [ 'Eowin', 'Bilbo', 'Frodo' ].
Many folks would stop here, write a few unit tests and be done with it. We, on
the other hand, will take this to the next level.
Our getHighScorers function has some functionality that we may want to use elsewhere in the future. Let's break down what we might be able to extract:
* ordering a list in descending fashion by a property (sort)
* taking the first N items of a list (slice)
* plucking a property from each item in a list (map)

// Altered slightly to allow us to compare
// things like strings and numbers.
//
// descBy :: (String, [a]) -> [a]
const descBy = (prop, xs) =>
[...xs].sort((a, b) =>
a[prop] < b[prop] ? 1 : (a[prop] === b[prop] ? 0 : -1)
)
// takeN :: (Number, [a]) -> [a]
const takeN = (n, xs) =>
xs.slice(0, n)
// mapProp :: (String, [a]) -> [b]
const mapProp = (prop, xs) =>
xs.map(x => x[prop])
// 1. pass `score` and `xs` to `descBy`
// 2. pass the return value of `descBy`
// to `takeN(3, __)`
// 3. pass the return value of `takeN`
// to `mapProp('name', __)` where we map over
// the list and pull out each one's `name`
//
// getHighScorers :: [Object] -> [String]
const getHighScorers = xs =>
mapProp('name', takeN(3, descBy('score', xs)))
// results object here...
getHighScorers(results) // => [ 'Eowin', 'Bilbo', 'Frodo' ]
Try this code in the ramda REPL.
This is starting to look good, but that getHighScorers function is looking a bit dense. Since we have a seeming pipeline of transformations that we're applying to a list, wouldn't it be great if we could simply list these transformations in a "flat" way (instead of a "nested" way like we do above) and then pass the data to this list of transformations?
compose
Let's take our getHighScorers function and rewrite it using ramda's compose function:
import compose from 'ramda/src/compose'
// const getHighScorers = xs =>
// mapProp('name', takeN(3, descBy('score', xs)))
// getHighScorers :: [Object] -> [String]
const getHighScorers = xs =>
compose(mapProp('name'), takeN(3), descBy('score'))(xs)
Let's first clarify what compose is doing:
compose(f, g)(x) === f(g(x))
Say it aloud: "f after g." With compose, the function furthest to the right is applied first with the value (x), and the return value of that function is passed to the next function to its left; this repeats until all functions have been applied.
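Here's a tiny, concrete check of that ordering (a sketch with made-up helpers):

const double = x => x * 2
const inc = x => x + 1

// double runs first, then inc: (10 * 2) + 1
compose(inc, double)(10) // => 21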
Cool – but wait! How can descBy, takeN and mapProp only accept one argument at a time when they all accept two?! In order to make these a reality, we can make use of ramda's curry function which we dove into in my previous post on function currying.
import compose from 'ramda/src/compose'
import curry from 'ramda/src/curry'
// descBy :: String -> [a] -> [a]
const descBy = curry((prop, xs) =>
[...xs].sort((a, b) =>
a[prop] < b[prop] ? 1 : (a[prop] === b[prop] ? 0 : -1)
)
)
// takeN :: Number -> [a] -> [a]
const takeN = curry((n, xs) =>
xs.slice(0, n)
)
// mapProp :: String -> [a] -> [b]
const mapProp = curry((prop, xs) =>
xs.map(x => x[prop])
)
// getHighScorers :: [Object] -> [String]
const getHighScorers =
compose(mapProp('name'), takeN(3), descBy('score'))
Try this code in the ramda REPL.
You may also notice that we removed xs => from getHighScorers because when we use compose and pass the final argument in at the end, it in fact becomes redundant. Our composition sits and waits for either the data to be applied or for it to be used another way: more compositions! This leads us down a powerful path whereby we can now compose different functions together and combine them into a final composition.
// getTop3 :: [a] -> [a]
const getTop3 =
compose(takeN(3), descBy('score'))
// getHighScorers :: [Object] -> [String]
const getHighScorers =
compose(mapProp('name'), getTop3)
Try this code in the ramda REPL.
This is where we truly begin to see the power of compose, for we are able to break our functions or function compositions out into tiny little pieces that we chain together like water pipes or guitar pedals.
We are now empowered (nay – encouraged!) to provide meaningful names in the context of what we’re trying to accomplish.
Composing compositions also allows us to use our type signatures to tell a story about what behavior is expected with each little part on our path to the ultimate goal.
pipe vs compose
For various reasons that are usually a matter of opinion, many people prefer function application to flow from left to right instead of right to left (the latter being what you get with compose). So if you find yourself thinking the same thing, pipe is for you:
// <------------- <------ <-------------
compose(mapProp('name'), takeN(3), descBy('score'))(xs)
// versus
// --------------> -------> -------------->
pipe(descBy('score'), takeN(3), mapProp('name'))(xs)
If you're working with Promise-returning functions, there's really nothing to it! Instead of compose or pipe, use composeP or pipeP.
Once you adopt this pattern, you may find it initially difficult to inspect your data at a given point in the pipeline; however, here’s a tip that will solve most of your problems:
compose(
mapProp('name'),
x => (console.log(x), x),
takeN(3),
descBy('score')
)
// or
const logIt = x => (console.log(x), x)
compose(
mapProp('name'),
logIt,
takeN(3),
descBy('score')
)
This logs whatever the value in the pipeline is at that time and returns that value to pass it on just as it would have.
Thanks for reading! Until next time,
Robert
Functional Programming concepts have been pouring into the JavaScript community for a number of years now, and many of us struggle to keep up. I’ve been lucky enough to be able to work with some mentors and functional tools that have helped me along the way. One of these tools is ramda.js, and it was my gateway to the larger Functional Programming world. I hope it will be for you, as well.
To understand ramda, you first have to understand a concept known as “currying.” The ramda website states,
The parameters to Ramda functions are arranged to make it convenient for currying.
There are some function currying articles on the ramda site, such as Favoring Curry and Why Curry Helps by Scott Sauyet, which are great for explaining the benefits and power of currying. Those articles (and many other resources) do great jobs of explaining how to use currying and why, so I’ll briefly touch on those points, but I really want to focus on how it works under the hood and how this funny little concept will completely change the way that you program.
Other ramda posts:
Many articles already cover this, so I’ll keep it short.
Let’s start with a function that takes two numbers and adds them together:
// add :: (Number, Number) -> Number
const add = (a, b) =>
a + b
As our fake type signature describes, add takes two arguments (essentially, a tuple) that are both of type Number and returns a value of type Number.
But if we wanted to create a function that adds 10 to anything, we could write the following:
// add :: Number -> Number -> Number
const add = a => b =>
a + b
// which is the same as
function add(a) {
return function(b) {
return a + b
}
}
// and then
// add10 :: Number -> Number
const add10 = add(10)
add10 // => Function
add10(4) // => 14
Note the change in type signature: we now have singular arguments that are accepted at a time instead of the tuple style. When we provide the first argument, we are then returned a function that will sit and wait until all the functions are applied before giving us a value. This method can be useful in many situations, but consider the following:
add(10)(4)
That feels awkward, right? Fear not! There is a way.
The curry Function
Ramda provides us a function named curry that will take what might be considered a "normal" JavaScript function definition with multiple parameters and turn it into a function that will keep returning a function until all of its parameters have been supplied. Check it out!
import curry from 'ramda/src/curry'
const oldAdd = (a, b) =>
a + b
const add = curry(oldAdd)
add(10) // => Function
add(10)(4) // => 14
add(10, 4) // => 14
Or if you want to have curry baked into your original add function:
// add :: Number -> Number -> Number
const add = curry((a, b) =>
  a + b
)
The magical curry function doesn't care when you provide arguments or how you do so – it will just keep returning you partially applied functions until all arguments have been applied, at which point it will give you back a value.
How Does curry Work?
This might seem blasphemous, but to understand how curry works under the hood, we're going to dive into a different library's implementation of it: crocks by @evilsoft. (Crocks is similar to ramda but dives more into abstract data types (ADTs) and is more towards the deeper end of the Functional Programming pool.) I think crocks' implementation is excellent, and 99% of it being in one file makes for a great teaching tool.
If you want to jump ahead, here is a link to crocks’ curry function:
https://github.com/evilsoft/crocks/blob/master/src/core/curry.js
Where do we start with understanding this next-level JavaScript? Always start with the types, as they can tell a story.
curry’s Story
What does this tell us?
// curry :: ((a, b, c) -> d) -> a -> b -> c -> d
* ((a, b, c) -> d) tells us that it accepts a function that has n parameters of any type and returns a value of any type
* -> a -> b -> c tells us that it then accepts each parameter – but only 1 at a time!
* -> d tells us that it ultimately returns the value as specified in the function

Sounds simple, right? Easier said than done!
// curry :: ((a, b, c) -> d) -> a -> b -> c -> d
//
// 1. we accept a function
const curry = (fn) => {
// 2. we return a function taking any `n` arguments
return (...xs) => {
// make sure we have a populated list to work with;
// `undefined` is the value for the Unit type in
// crocks and calling our function must utilize some
// sort of value.
const args =
xs.length ? xs : [ undefined ]
// if the number of args sent are
// less than that required, then
// don't do more work; go ahead and
// return a new version of our function
// that is still waiting for more
// arguments to be applied.
if (args.length < fn.length) {
// way of safely creating a new function
// and binding arguments to it without
// calling it.
return curry(Function.bind.apply(fn, [ null ].concat(args)))
}
// if we've provided all arguments,
// then let's apply them and give
// back the result.
//
// otherwise, let's do some work
// and see if, based on the number
// of arguments, we return a new
// function with fewer arguments
// or go ahead and call the function
// with the final argument so we can
// get back a value.
//
// NOTE: `applyCurry` is defined below.
const val =
args.length === fn.length
? fn.apply(null, args)
: args.reduce(applyCurry, fn)
// 3. if our value is still a function, then
// let's return the curried version of our
// function that still needs some arguments
// to be applied and repeat everything above.
//
// otherwise, we're all done here, so
// let's return the value.
return isFunction(val)
? curry(val)
: val
}
}
const applyCurry = (fn, arg) => {
// return whatever we received if
// fn is actually NOT a function.
if (!isFunction(fn)) { return fn }
// if we have more than 1 argument
// remaining to be applied, then let's
// bind a value to the next argument and
// keep going.
//
// otherwise, then yay let's go ahead
// and call that function with the argument;
// our `[ undefined ]` default saves us from
// some potential headache here.
return fn.length > 1
? fn.bind(null, arg)
: fn.call(null, arg)
}
const isFunction = x =>
typeof x === 'function'
With all of these checks in here, we can now run the following code and have it all work:
const add = curry((a, b) => a + b)
add // => Function
add(1) // => Function
add(1)(2) // => 3
add(1, 2) // => 3
add(1, 2, 99) // => 3 (we don't care about the last one!)
add(1, 2, 99, 2000) // => 3 (we don't care about the last two!)
curry In Action
If all of your functions are curried, you can start writing code that you never would have been able to before. Here is a small taste that we will cover more fully in a future Ramda Chops:
// addOrRemove :: a -> Array -> Array
const addOrRemove = x =>
ifElse(
contains(x),
without(of(x)),
append(x)
)
// addOrRemoveTest :: Array -> Array
const addOrRemoveTest =
addOrRemove('test')
addOrRemoveTest([ 'thing' ]) // => ["thing", "test"]
addOrRemoveTest([ 'thing', 'test' ]) // => ["thing"]
(View this example in a live REPL)
The addOrRemove function almost reads like English: “If something contains x, give me back that something without x; otherwise, append x to that something.” What is worth understanding here is that these functions each accept a number of arguments where the most generic/reusable are provided first (this is a tenet of Functional Programming). Here, we are able to create a very reusable function with partially applied values that sits and waits until the final bit – an array – is provided.
Thanks for reading! Until next time,
Robert
If you’d like to code along with this tutorial, check out part 1, part 2 and part 3 first to get set up.
Note: to learn more about the Elm language and syntax, check out the Elm Tutorial, the EggHead.io Elm course, subscribe to DailyDrip’s Elm Topic, James Moore’s Elm Courses or check out Elm on exercism.io.
I meant to finish this blog series a few months ago, but while I didn’t finish the writing part, I did manage to do the code for part 4. If you’ve made it this far and would like to see the Elm code extracted into smaller modules, then you can do so here: https://github.com/rpearce/elm-geocoding-darksky/tree/pt-4/src.
I’d like to move on to other topics, and unfortunately, this is the best way I know how to do so.
Until next time,
Robert
If you’d like to code along with this tutorial, check out part 1 and part 2 first to get set up.
Note: to learn more about the Elm language and syntax, check out the Elm Tutorial, the EggHead.io Elm course, subscribe to DailyDrip’s Elm Topic, James Moore’s Elm Courses or check out Elm on exercism.io.
In this post we will use Elm to fetch and display the current weather based on the geocode data we receive from an input field.
The project we’re making will be broken into parts here (branches will be named for each part): https://github.com/rpearce/elm-geocoding-darksky/. Be sure to check out the other branches to see the other parts as they become available.
The code for this part is located in the pt-3 branch: https://github.com/rpearce/elm-geocoding-darksky/tree/pt-3.
Let’s get the weather data for Auckland, NZ (-36.8484597,174.7633315). If we start up our DarkSky proxy and run
λ curl localhost:5051/forecast/-36.8484597,174.7633315
then we will see response data like this:
{
"timezone": "Pacific\/Auckland",
"currently": {
"summary": "Overcast",
"icon": "cloudy",
"temperature": 61.42,
...
},
"hourly": { ... },
"daily": { ... }
}
While all we care about are the summary, icon and temperature properties within the top-level currently property, we will only use temperature in this part.
Disclaimer: DarkSky units are in us by default. You can specify other unit types by appending a units query parameter to the end like this:
λ curl localhost:5051/forecast/-36.8484597,174.7633315?units=si
Read more about DarkSky request parameters in the DarkSky docs to customize your response data.
Now that we’ve got our data in the correct units, let’s model this data in Elm!
Based on our DarkSky response, let’s list out what we’re looking at:
* currently, which has 3 notable properties:
  * summary
  * icon
  * temperature
Since we have two levels of data, currently and its child properties, let’s create two type aliases to represent this data.
type alias Weather =
{ currently : WeatherCurrently
}
type alias WeatherCurrently =
{ icon : String
, summary : String
, temperature : Float
}
And now we can add a property to our Model type alias that can be of our Weather type:
type alias Model =
{ address : String
, coords : Coords
, weather : Weather
}
Uh oh! Our Model has a defaults function called initialModel, and now that we’ve added weather into the mix, we’ll need to give that default values, as well:
initialModel : Model
initialModel =
{ address = ""
, coords = ( 0, 0 )
, weather = initialWeather
}
initialWeather : Weather
initialWeather =
{ currently = initialWeatherCurrently
}
initialWeatherCurrently : WeatherCurrently
initialWeatherCurrently =
{ icon = "–"
, summary = "–"
, temperature = 0
}
These are defaults that we provide in the event that we have no data to work with (initially or if something goes wrong).
Just as we did in the geocoding post section on JSON decoding, we want to leverage NoRedInk’s elm-decode-pipeline to define how our JSON response should be structured and thus parsed.
decodeWeather : Decoder Weather
decodeWeather =
decode Weather
|> required "currently" decodeWeatherCurrently
decodeWeatherCurrently : Decoder WeatherCurrently
decodeWeatherCurrently =
decode WeatherCurrently
|> required "icon" string
|> required "summary" string
|> required "temperature" float
While we could use Json.Decode.at to potentially have less code, there is absolutely nothing wrong with being verbose if it leads to clarity.
We know that we’re going to have to send latitude and longitude Coords to our DarkSky proxy server, as well as any additional options, so let’s define the URL for that and the fetching function just like we did for geocoding.
weatherUrl : Coords -> String
weatherUrl ( lat, lng ) =
"http://localhost:5051/forecast/"
++ (toString lat)
++ ","
++ (toString lng)
-- this is where you can add your query params
fetchWeather : Coords -> Cmd Msg
fetchWeather coords =
Http.get (weatherUrl coords) decodeWeather
|> Http.send ReceiveWeather
To define an HTTP request in Elm, we need
* ✅ a URL to point to
* ✅ a package like Http to help us build the request
* ✅ a decoder to handle parsing the response data
* 🤷 a Msg type that our update function can pattern match on
Right! We can’t forget to add ReceiveWeather as a Msg type. It should be almost the same as ReceiveGeocoding:
type Msg
= UpdateAddress String
| SendAddress
| ReceiveGeocoding (Result Http.Error GeoModel)
| ReceiveWeather (Result Http.Error Weather)
| NoOp
When we handled our geocode response in the prior post, inside of ReceiveGeocoding we returned ( newModel, Cmd.none ), for we had no further actions to take. Instead of our action in this tuple being Cmd.none, let’s instead call our fetchWeather function and pass it our geocoded coordinates:
update : Msg -> Model -> ( Model, Cmd Msg )
update msg model =
case msg of
-- ...removed for brevity
ReceiveGeocoding (Ok { results, status }) ->
let
-- ...
newModel =
-- ...
in
( newModel, fetchWeather newModel.coords )
-- ...
ReceiveWeather (Ok resp) ->
( { model | weather = { currently = resp.currently } }
, Cmd.none
)
ReceiveWeather (Err _) ->
( model, Cmd.none )
Again, at the end of ReceiveGeocoding, we return our newModel as well as the command to go and fetch the weather with the coordinates we’re storing on our newModel.
Whenever the HTTP request and decoding gives us back a result with the Msg type of ReceiveWeather, we then update the weather property on our model record to have the currently data parsed from the decoder.
Finally, to make sure we’re doing each step correctly, let’s add the temperature to our view:
view : Model -> Html Msg
view model =
div []
[ form [ onSubmit SendAddress ]
[ input
[ type_ "text"
, placeholder "City"
, value model.address
, onInput UpdateAddress
]
[]
]
, p [] [ text ("Coords: " ++ (toString model.coords)) ]
, p [] [ text ("Weather: " ++ (toString (round model.weather.currently.temperature))) ]
]
Here we use Basics.round because an approximation is alright for weather.
Now, when you rebuild your code with ./build, open index.html and submit a city/address name, you’ll first see the Coords update on the page and then see the Weather result once it’s done.
Hooray! We can geocode an address and fetch the weather via two different proxy servers and display a result! That’s great, but our Main.elm file is getting quite large, so stay tuned for the next part where we pull our code into smaller chunks without losing clarity.
If you’d like to check out the code from this part, it is located here: https://github.com/rpearce/elm-geocoding-darksky/tree/pt-3.
Until next time,
Robert
If you’d like to code along with this tutorial, check out part 1 first to get set up.
Note: to learn more about the Elm language and syntax, check out the Elm Tutorial, the EggHead.io Elm course, subscribe to DailyDrip’s Elm Topic, James Moore’s Elm Courses or check out Elm on exercism.io.
Before we can send a weather forecast request to DarkSky, we need to geocode an address to get its latitude and longitude. In this post, we’re going to use Elm and our geocoding server from Part 1 to geocode an address based on a user’s input in a text box.
Warning: this is a hefty post.
The project we’re making will be broken into parts here (branches will be named for each part): https://github.com/rpearce/elm-geocoding-darksky/. Be sure to check out the other branches to see the other parts as they become available.
The code for this part is located in the pt-2 branch: https://github.com/rpearce/elm-geocoding-darksky/tree/pt-2.
What we want to do with our program today is create an HTTP GET request with an address that is input by a user and get back its latitude and longitude. These steps will get us there:
At the top level for our app, we only care about an address and latitude and longitude coordinates. While the address’ type will definitely be String, we can choose between a record or tuple to house our coordinates; however, each of these values must be a Float type, as coordinates come in decimal format. For no particular reason, we’re going to use a tuple.
type alias Model =
{ address : String
, coords : Coords
}
type alias Coords =
( Float, Float )
I like to keep my models/type aliases fairly clean and primed for re-use in type definitions, so I created a separate type alias, Coords, to represent ( Float, Float ).
Let’s take a look at what a geocoding request’s response data for Auckland looks like so we can understand what we’re working with.
{
"results": [
{
"geometry": {
"location": {
"lat": -36.8484597,
"lng": 174.7633315
},
// ...
},
// ...
}
],
"status": "OK"
}
If you’ve set up your geocoding proxy, you can see these same results by running this command:
λ curl localhost:5050/geocode/Auckland
We can see here that we get back a status string and a results list where one of the results contains a geometry object, and inside of that, we find location and finally, our quarry: lat and lng. If we were searching for this with JavaScript, we might find this data like so:
response.results.find(x => x['geometry']).geometry.location
// { lat: -36.8484597, lng: 174.7633315 }
What would happen in vanilla JavaScript if there were no results, or those object keys didn’t exist? Elm steps up to help us solve for the unexpected.
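To make that concrete, here is a sketch of how the JavaScript version could blow up at runtime (assuming an empty results list):
// given no results...
const response = { results: [], status: 'ZERO_RESULTS' }
// ...find returns undefined, and the next property access throws
response.results.find(x => x['geometry']).geometry.location
// => TypeError: Cannot read property 'geometry' of undefined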
Based on the geocoding response, let’s list out what we’re looking at:
* status
* results, a list in which each result contains a geometry object
* each geometry object has a location object
* each location object has both lat and lng properties, each of which use decimal points

Since we’re going to need to decode these bits of data and reuse the types a few times, let’s create type aliases for each of these concepts (prefixed with Geo):
type alias GeoModel =
{ status : String
, results : List GeoResult
}
type alias GeoResult =
{ geometry : GeoGeometry }
type alias GeoGeometry =
{ location : GeoLocation }
type alias GeoLocation =
{ lat : Float
, lng : Float
}
If you’re not sure what type alias means, read more about type aliases in An Introduction to Elm.
There are a number of ways to decode JSON in Elm, and Brian Hicks has written about this (and has a short book on decoding JSON), and so have many others, such as Thoughtbot. Today, we’re going to be working with NoRedInk’s elm-decode-pipeline.
First, we install the package into our project:
λ elm package install NoRedInk/elm-decode-pipeline
In our Main.elm file, we can import what we’ll need from Elm’s core Json.Decode module as well as the package we’ve just installed.
-- Importing from elm core.
-- We know from our type aliases that all we're working
-- with right now are floats, lists and strings.
import Json.Decode exposing (float, list, string, Decoder)
-- importing from elm-decode-pipeline
import Json.Decode.Pipeline exposing (decode, required)
Now we can write our decoders!
decodeGeo : Decoder GeoModel
decodeGeo =
decode GeoModel
|> required "status" string
|> required "results" (list decodeGeoResult)
decodeGeoResult : Decoder GeoResult
decodeGeoResult =
decode GeoResult
|> required "geometry" decodeGeoGeometry
decodeGeoGeometry : Decoder GeoGeometry
decodeGeoGeometry =
decode GeoGeometry
|> required "location" decodeGeoLocation
decodeGeoLocation : Decoder GeoLocation
decodeGeoLocation =
decode GeoLocation
|> required "lat" float
|> required "lng" float
Here we declare that we’d like to decode the JSON string according to our type aliases, such as GeoModel, and we expect certain keys to have certain value types. In the case of status, that’s just a string; however, with results, we actually have a list of some other type of data, GeoResult, and so we create another decoder function down the line until we dig deep enough to find what we’re looking for. In short, we’re opting for functions and type-checking over deep nesting.
Why does this feel so verbose? Personally, I’m not yet comfortable using Json.Decode.at, which might look like
decodeString (at [ "results" ] (list (at [ "geometry", "location" ] (keyValuePairs float)))) jsonString
But with the former approach, we get to be very specific with exactly what we are expecting our data to be shaped like while maintaining clarity.
It’s time to add our view function. All we’re going for today is:
* an input for the address that updates our model by responding to the onInput event
* a form that responds to the onSubmit event
* a paragraph displaying our coordinates like Coords: (123, 456)
As usual, let’s download the official elm-lang/html package:
λ elm package install elm-lang/html
Then let’s import what we need from it:
import Html exposing (Html, div, form, input, p, text)
import Html.Attributes exposing (placeholder, type_, value)
import Html.Events exposing (onInput, onSubmit)
Each import is a function that we can use to help generate HTML5 elements which Elm then works with behind the scenes.
view : Model -> Html Msg
view model =
div []
[ form [ onSubmit SendAddress ]
[ input
[ type_ "text"
, placeholder "City"
, value model.address
, onInput UpdateAddress
]
[]
]
, p [] [ text ("Coords: " ++ (toString model.coords)) ]
]
Our view function takes in our model and uses Elm functions to then render output. Great! But what are SendAddress and UpdateAddress? If you’re coming from JavaScript, you might think these are callbacks or higher-order functions, but they are not. They are custom message types (that we’ll define momentarily) that will be used in our update function to determine what flow our application should take next.
Thus far, we know of two message types, UpdateAddress and SendAddress, but how do we define them? If you look at our view function again, you’ll see the return type Html Msg. The second part of this will be the type that we create, and our custom message types will be a part of that! This is something called a union type.
type Msg
= UpdateAddress String
| SendAddress
| NoOp
We will be adding more to this shortly, but this is all we have come across thus far.
Staying consistent with The Elm Architecture, we’ll define our update function in order to update our data and fire off any commands that need to happen. If you’re familiar with Redux, this is where the idea for a “reducer” came from.
This is tough to do in a blog post, so please be patient, and we’ll walk through this:
update : Msg -> Model -> ( Model, Cmd Msg )
update msg model =
case msg of
UpdateAddress text ->
( { model | address = text }
, Cmd.none
)
SendAddress ->
( model, sendAddress model.address )
-- more code here shortly...
_ ->
( model, Cmd.none )
Let’s walk through this step-by-step:
* if the message is UpdateAddress, then
  * we receive the input’s value as a string (defined in our union type) and name it text
  * we update our model’s address with this text
  * we return a Cmd to essentially do nothing else (it’ll pass through the union type and settle on the NoOp)
* if the message is SendAddress, then
  * we return our model unchanged alongside a command that calls sendAddress with our model’s address
In order to build and send HTTP requests, we’ll need to make sure we download the elm-lang/http package:
λ elm package install elm-lang/http
and import it:
import Http
In our update function, we referenced a function named sendAddress and passed it our model’s address as a parameter. This function should accept a string, initiate our HTTP request and return a command with a message.
sendAddress : String -> Cmd Msg
sendAddress address =
Http.get (geocodingUrl address) decodeGeo
|> Http.send ReceiveGeocoding
geocodingUrl : String -> String
geocodingUrl address =
"http://localhost:5050/geocode/" ++ address
Our sendAddress function does this:
* builds a GET request with the URL (via geocodingUrl) and our decodeGeo decoder function
* pipes the result of Http.get to be the second argument for Http.send

Note that Http.send’s first argument is a Msg that we haven’t defined yet, so let’s add that to our Msg union type:
type Msg
= UpdateAddress String
| SendAddress
| ReceiveGeocoding (Result Http.Error GeoModel)
| NoOp
Basically, we’ll either get back an HTTP error or a data structure in the shape of our GeoModel.
Finally, we now need to handle the successful and erroneous responses in our update function:
update : Msg -> Model -> ( Model, Cmd Msg )
update msg model =
case msg of
UpdateAddress text ->
( { model | address = text }
, Cmd.none
)
SendAddress ->
( model, sendAddress model.address )
ReceiveGeocoding (Ok { results, status }) ->
let
result =
case status of
"OK" ->
results
|> List.head
|> Maybe.withDefault initialGeoResult
_ ->
initialGeoResult
location =
result.geometry.location
newModel =
{ model | coords = ( location.lat, location.lng ) }
in
( newModel, Cmd.none )
ReceiveGeocoding (Err _) ->
( model, Cmd.none )
_ ->
( model, Cmd.none )
-- This should go with other `init`s
-- but is placed here for relevance
initialGeoResult : GeoResult
initialGeoResult =
{ geometry =
{ location =
{ lat = 0
, lng = 0
}
}
}
Instead of having success/error logic inside one ReceiveGeocoding case match, we use Elm’s pattern matching to allow us to match on the message and Ok or Err results.
Again, let’s do this step-by-step:
* when ReceiveGeocoding is Ok, then
  * we destructure the results and status variables from the response
  * we check the status from the response to make sure all is well
  * if the status is "OK", we try to get the first item in the results list and fallback to initialGeoResult if there are no results (I love Elm for enforcing this)
  * if the status is not "OK", we fall back to the initialGeoResult
  * we pull out the location record, build an updated model record, and return it
* when ReceiveGeocoding is Err, we return our existing model and no command
Now that we’re through the core of the application’s contents, we can wire up the remaining bits and get it to compile:
-- Define our HTML program
main : Program Never Model Msg
main =
Html.program
{ init = init
, view = view
, update = update
, subscriptions = subscriptions
}
-- Here is our initial model
init : ( Model, Cmd Msg )
init =
( initialModel, Cmd.none )
initialModel : Model
initialModel =
{ address = ""
, coords = ( 0, 0 )
}
-- We're not using any subscriptions,
-- so we'll define none
subscriptions : Model -> Sub Msg
subscriptions model =
Sub.none
Remember that you can look at the source code for this part as a guide.
This has been a massive post on simply fetching geocode data from an API. I’ve found it’s difficult to write posts on Elm in little bits, for you have to have everything in the right place and defined before it’ll work. Subsequent posts in this series will be shorter, as we’ll have already done the heavy-lifting.
Until next time,
Robert
This post will cover setting up Elm, a geocoding proxy, and a DarkSky proxy. We’ll need all of these things set up in order to get our weather app to work and not sacrifice our API keys.
By the end of this post, you will have a “Hello, world!” Elm app with a simple ./build command, and you should be able to cURL both your geocoding and DarkSky proxies to receive response data that we will use in the coming lessons.
The project we’re making will be broken into parts here (branches will be named for each part): https://github.com/rpearce/elm-geocoding-darksky/. Be sure to check out the other branches to see the other parts as they become available.
The code for this part is located in the pt-1 branch: https://github.com/rpearce/elm-geocoding-darksky/tree/pt-1.
This tutorial assumes that you already have installed Node.js (I use NVM for managing Node versions and am using v8.3).
Once you’ve got Node installed, we can begin.
From your favorite project folder, let’s create a new project folder named elm-geocoding-darksky and change the current working directory to be the new folder:
λ mkdir elm-geocoding-darksky
λ cd elm-geocoding-darksky
You can install elm via any of the methods on the elm install page or by one of these methods:
* brew install elm
* npm i elm -g for a global binary, or npm init -y && npm i elm to create a package.json file and install elm to it (you’ll have to run this latter method via npx elm, as it’ll be looking for the binary in your ./node_modules/.bin/ directory)
directory)I’ve found that having a tool re-format my Elm code to an agreed-upon format makes me more efficient and makes it easier for others to read my code. Check out these projects for more on how to do this:
Our goal here is to compile our elm project to an elm.js file and include that on a webpage (which we’ll make in a minute).
First, let’s create a src/ directory to house our source code and a Main.elm file within it:
λ mkdir src
λ touch src/Main.elm
Next, we want to install Elm’s HTML package so that we can access its HTML-related functions:
λ elm package install elm-lang/html
Within the Main.elm file, add the following:
module Main exposing (..)
import Html exposing (text)
main =
text "Hello, world!"
Here, we import the Html package that we installed, specifically expose the text function from it and then use that function to tell Elm that we want some HTML-friendly text.
Note: to learn more about the Elm language and syntax, check out the Elm Tutorial, the EggHead.io Elm course, subscribe to DailyDrip’s Elm Topic, James Moore’s Elm Courses or check out Elm on exercism.io.
We can then compile this and output it to elm.js:
λ elm make src/Main.elm --output=elm.js
You should now have a (quite large) file, elm.js, in your project’s root.
We’re almost done!
Finally, create a new file, index.html, and add the following to it:
<!DOCTYPE html>
<html lang="en">
<head>
<title>Weather</title>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1">
</head>
<body>
<script type="text/javascript" src="elm.js"></script>
<script>
(function(global) {
var node = document.createElement('div')
document.body.appendChild(node)
var app = Elm.Main.embed(node)
global.app = app
})(window)
</script>
</body>
</html>
If you run λ open index.html, you should be able to view the file in the browser and see Hello, world!. Congrats! You’re primed and ready to start building.
If you’re lazy like me, you can create an executable file, build, that will perform our elm make ... command for us:
λ touch build
λ chmod +x build
λ cat <<EOF > ./build
#!/bin/bash
elm-make src/Main.elm --output ./elm.js
EOF
This executable can now handle whatever build options and processes we’ll add for the future (such as JavaScript minification & uglifying):
λ ./build
Success! Compiled 1 module.
Successfully generated ./elm.js
In order to not expose our API keys for geocoding and weather forecasts, we’ll be using a separate proxy server for each service. I recently wrote a post entitled Node.js Geocoding Proxy with Paperplane where you can see a full explanation of what we’re doing and how to do it. If you don’t care about the how and why of setting up these little servers, then just continue on!
First, you’ll need to get a Google Maps API Key from here: https://developers.google.com/maps/documentation/geocoding/start#get-a-key.
Once you’ve done that, go ahead and clone or download the geocoding-proxy project on GitHub and follow the directions to get set up. Given you’ve got Node installed, you’ve copied over the .env file and set your API key in there, then running λ node index.js should start the server. From another command-line tab, run this and see if you get a similar result:
λ curl localhost:5050/geocode/Auckland
{"results":[...]}
If so, congrats! If you get stuck, create an issue on the geocoding-proxy issues page, and I’ll see if I can help.
(This is almost exactly like the geocoding proxy setup.)
First, you’ll need to get a DarkSky API Key from here: https://darksky.net/dev/.
Once you’ve done that, go ahead and clone or download the DarkSky-proxy project on GitHub and follow the directions to get set up. Given you’ve got Node installed, you’ve copied over the .env file and set your API key in there, then running λ node index.js should start the server. From another command-line tab, run this and see if you get a similar result:
λ curl http://localhost:5051/forecast/37.8267,-122.4233
{"latitude":37.8267,"longitude":-122.4233,...}
If so, congrats! If you get stuck, create an issue on the DarkSky-proxy issues page, and I’ll see if I can help.
Thank you for reading this far! Now that we’ve got our Elm app build process set up and your proxy servers ready to work, we can start constructing our application piece-by-piece in the next article in the series.
If you’d like to be notified of when articles are published, subscribe!
Until next time,
Robert
Converting addresses, cities and other locations to latitude and longitude and back again is something that is expected in the software application world today. Whether someone is asking for directions, plotting optimal beer delivery routes or tagging a photo of their cronut in a local cafe, managing location data is an important skillset for developers to have. Numerous services, typically in the form of application programming interfaces (APIs), exist to provide folks with ways of accessing this data. Today we’ll be using the Google Maps Geocoding API to complete the task of acquiring the geo-data for any place name; however, we will be creating a Node.js server as a proxy (a go-between) for our request instead of embedding this request in a browser.
If you are granted an API key for a service that is private and mapped to you, it is a good idea to keep it that way. If you commit this API key to source control or expose it via your frontend code, then someone could take your key and pretend to be you. In order to avoid this, it is recommended that you keep such keys hidden, for example, as environment variables set on a server. Thus, we are going to create a small server to act as a proxy between the client (a web browser, app or cURL) and the API in question: the Google Maps Geocoding API.
It is possible to do everything you need with Node’s http package, but I like the approach paperplane takes with viewing the request and response aspects of handling an HTTP request as a pure function where the request is the input and the response is what is returned from it:
Request -> Response
whereas many Node frameworks’ handlers accept a function with the request and response as two arguments and do not utilize a return value, yielding the signature:
(IncomingMessage, ServerResponse) -> ()
The paperplane approach makes a good deal more sense to me. You can read more about the “why” on paperplane’s getting started guide.
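As a rough sketch of that signature, a paperplane handler is simply a function of the request; here leaning on paperplane’s json helper, which appears in the code below:
// Request -> Response
const hello = req =>
  json({ message: 'Hello!' })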
The project we’ll be making can be seen in its entirety here: https://github.com/rpearce/geocoding-proxy/.
Note: what we’ll be making is by no means a production-level application, as that would be outside the scope of this post. However, there are some slightly advanced tangential topics that I will be glossing over (sometimes providing links to) in order to not write a book. Send me an email if I can be clearer in certain areas.
This tutorial assumes that you already have installed Node.js (I use NVM for managing Node versions and am using v8.1) and optionally the yarn package manager.
Once you’ve got Node and yarn installed, we can begin.
From your favorite project folder, let’s create a new project folder named geocoding-proxy and change the current working directory to be the new folder:
λ mkdir geocoding-proxy
λ cd geocoding-proxy
Once we’re in the project folder, let’s initialize a package.json file to make it easy to manage and hang on to our project’s dependencies:
λ npm init -y
or if you have yarn installed:
λ yarn init -y
You should now have a package.json file with some JSON values in it.
Next, let’s install the tools that we’re going to use:
λ npm install --save axios dotenv paperplane ramda
or
λ yarn add axios dotenv paperplane ramda
You can get yourself an API key from this page. Once you’ve done this, you’ll need to copy the .env.example file at your project’s root (λ cp .env.example .env) and replace the value of the GEO_KEY with your API key. Your .env file should look like
GEO_KEY=abcdefg-hijklmn-op
PORT=5050
Once your dependencies are installed, let’s create a server to see if we can get things working. First, create index.js at your project’s root and open it in your favorite text editor.
λ touch index.js
Next, let’s import the packages we’ll be using and create a basic “Hello, World!” server:
// Make our .env configuration file available
require('dotenv').config()
// Import libraries
const http = require('http')
const { compose } = require('ramda')
const { json, logger, methods, mount, parseJson, routes } = require('paperplane')
// Application-specific code
const endpoints = routes({
'/': methods({
GET: req => (
Promise
.resolve('hello world')
.then(json)
)
})
})
const app = compose(endpoints, parseJson)
// Server options
const opts = { errLogger: logger, logger }
const port = process.env.PORT || 3000
const listening = err =>
err ? console.error(err) : console.info(`Listening on port: ${port}`)
// Start the server
http.createServer(mount(app, opts)).listen(port, listening)
(Read up more on how paperplane works on its getting started page or by taking a look at the demo application. Also check out Ramda’s compose function to learn about effective function composition.)
We can start the server in a terminal window by running
λ node index.js
Listening on port: 5050
From another terminal window, let’s use cURL to see if this works:
λ curl localhost:5050
"hello world"
It works!
Now that we know our server works, let’s see if we can get it to echo back a location/address parameter we send it at a route we’ll create called /geocode. Let’s remove our '/' endpoint and “hello, world!” code and add some for geocoding:
const endpoints = routes({
'/geocode/:address': methods({
GET: req => (
Promise
.resolve(req.params.address)
.then(json)
)
})
})
The req object gives us a params object with the key address, since that was what we specified we’d like our parameter to be named by setting the /geocode/:address key in the routes function argument.
With the new endpoint added, save the file, restart your server (stop it with Ctrl + C), and run cURL with a city name this time:
λ curl localhost:5050/geocode/Auckland
"Auckland"
We’re almost there! Instead of echoing back whatever address the server receives, let’s instead make an HTTP GET request to the geocode API using the axios package (remember to require it at the top: const axios = require('axios')):
const endpoints = routes({
'/geocode/:address': methods({
GET: req => (
axios({
method: 'GET',
url: 'https://maps.googleapis.com/maps/api/geocode/json',
params: {
key: process.env.GEO_KEY,
address: req.params.address
}
})
.then(json)
)
})
})
In this code, we are using the JavaScript Promise-based axios tool to create a GET request to the geocode API. Take note of our params object here; since we’re using the dotenv package and configuring that above, we get access to the GEO_KEY value in our .env file, and we separately get to pass on the address param, as well. When this request is sent, the url will look like:
https://maps.googleapis.com/maps/api/geocode/json?key=abcdefg&address=Auckland
After restarting your server, run λ curl localhost:5050/geocode/Auckland again.
λ curl localhost:5050/geocode/Auckland
{"message":"Converting circular structure to JSON","name":"TypeError"}
Uh oh! If we log the axios result, we’ll see a big response object that we don’t care too much about right now. The only key we want from this big response is the data key, so we can use Ramda’s prop function to simply access this object key and pass its return value down the chain:
// add `prop` to the require statement
const { compose, prop } = require('ramda')
// ...
const endpoints = routes({
'/geocode/:address': methods({
GET: req => (
axios({
method: 'GET',
url: 'https://maps.googleapis.com/maps/api/geocode/json',
params: {
key: process.env.GEO_KEY,
address: req.params.address
}
})
.then(prop('data'))
.then(json)
)
})
})
If all the stars have aligned and you restart and rerun the command again, you should see
λ curl localhost:5050/geocode/Auckland
{"results":[{"address_components":[{"long_name":"Auckland","short_name":"Auckland","types":["locality","political"]},{"long_name":"Auckland","short_name":"Auckland","types":["administrative_area_level_1","political"]},{"long_name":"New Zealand","short_name":"NZ","types":["country","political"]}],"formatted_address":"Auckland, New Zealand","geometry":{"bounds":{"northeast":{"lat":-36.660571,"lng":175.2871371},"southwest":{"lat":-37.0654751,"lng":174.4438016}},"location":{"lat":-36.8484597,"lng":174.7633315},"location_type":"APPROXIMATE","viewport":{"northeast":{"lat":-36.660571,"lng":175.2871371},"southwest":{"lat":-37.0654751,"lng":174.4438016}}},"place_id":"ChIJ--acWvtHDW0RF5miQ2HvAAU","types":["locality","political"]}],"status":"OK"}
Hooray! We now have geocode response data for Auckland like:
"status":"OK"
"formatted_address":"Auckland, New Zealand"
"location":{"lat":-36.8484597,"lng":174.7633315}
As you might imagine, having all of the request handling functions inside of paperplane’s routes function might get difficult to follow and modularize. With that in mind, let’s first pull the handler function out and into its own function:
const geocode = req =>
axios({
method: 'GET',
url: 'https://maps.googleapis.com/maps/api/geocode/json',
params: {
key: process.env.GEO_KEY,
address: req.params.address
}
})
.then(prop('data'))
.then(json)
const endpoints = routes({
'/geocode/:address': methods({
GET: geocode
})
})
You could now abstract the geocode function to another file if you wanted to, as well as the object that is passed to routes (think of a routes file that requires in the different handlers it needs).
We can refactor the code above even further and make it a bit more functional and closer to being “point-free” by including a few Ramda helpers:
const { compose, composeP, curryN, path, prop } = require('ramda')
// ...
// Application-specific code
const getGeocode = curryN(2, (key, address) =>
axios({
method: 'GET',
url: 'https://maps.googleapis.com/maps/api/geocode/json',
params: { key, address }
})
.then(prop('data'))
)
const geocode = compose(
composeP(
json,
getGeocode(process.env.GEO_KEY),
),
path(['params', 'address'])
)
const endpoints = routes({
'/geocode/:address': methods({
GET: geocode
})
})
const app = compose(endpoints, parseJson)
This code accomplishes the same goal as before, but now we have accomplished a few things:
* we no longer reach directly into req.params.address – what happens if any of those returned null or undefined? Instead, we use Ramda’s path helper.
* our getGeocode function returns a Promise thanks to axios, so we need to use composeP to compose our Promise-returning function.
* getGeocode is curried, so it can accept its key and address parameters at separate times. This is handy, for we could partially apply our key once, store that in a variable and reuse it over and over with different addresses (see the sketch below).
* we have decoupled the json helper from getGeocode and axios, meaning that function can now be leveraged in other ways instead of being hard-set to JSON.

If this scares the hell out of you, fear not! Check out Andrew van Slaar’s Ramda lessons on egghead.io, and if you’re liking what you’re seeing, Dr. Boolean’s “Mostly Adequate Guide to Functional Programming”.
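As promised above, that partial application of the key might look like this in practice (variable names are illustrative):
// apply the API key once...
const getGeocodeForKey = getGeocode(process.env.GEO_KEY)
// ...then reuse it with different addresses
getGeocodeForKey('Auckland') // => Promise of geocode data
getGeocodeForKey('Wellington') // => Promise of geocode data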
The project itself can be found at https://github.com/rpearce/geocoding-proxy, but here is our index.js file in its entirety:
// Make our .env configuration file available
require('dotenv').config()
// Import libraries
const http = require('http')
const axios = require('axios')
const { compose, composeP, curryN, path, prop } = require('ramda')
const { json, logger, methods, mount, parseJson, routes } = require('paperplane')
// Application-specific code
const getGeocode = curryN(2, (key, address) =>
axios({
method: 'GET',
url: 'https://maps.googleapis.com/maps/api/geocode/json',
params: { key, address }
})
.then(prop('data'))
)
const geocode = compose(
composeP(
json,
getGeocode(process.env.GEO_KEY),
),
path(['params', 'address'])
)
const endpoints = routes({
'/geocode/:address': methods({
GET: geocode
})
})
const app = compose(endpoints, parseJson)
// Server options
const opts = { errLogger: logger, logger }
const port = process.env.PORT || 3000
const listening = err =>
err ? console.error(err) : console.info(`Listening on port: ${port}`)
// Start the server
http.createServer(mount(app, opts)).listen(port, listening)
Tools like Node.js with paperplane make it very easy to create proxy servers to handle your requests in a safe fashion, so use them and always keep your API keys secret!
I’ve seen some feedback asking about CORS (cross-origin resource sharing), so here’s how you can do it (useful for running things on localhost):
const { cors, ... } = require('paperplane')
// ...
// Server options
const corsOpts = { methods: 'GET' }
const corsApp = cors(app, corsOpts)
// ...
// Start the server
http.createServer(mount(corsApp, opts)).listen(port, listening)
Read more about paperplane’s CORS API in paperplane’s CORS docs.
But it is as important to remember yourself and your values as you explore strange new lands. Frodo quoted Bilbo in The Lord of the Rings,
It’s a dangerous business, Frodo, going out your door. You step onto the road, and if you don’t keep your feet, there’s no knowing where you might be swept off to.
Thus, travel is as much about discovering wonderful new places and customs as it is holding on to what makes you who you are, hoping that the “you” that makes it back home isn’t the stranger you encountered on the road but a combination of the best aspects of both you and it.
The following day consisted of my mind completely shutting down after being unable to answer the following questions:
Apple’s Healthkit app on my phone told me I walked 10 steps that day and not many more the next. Clearly, something had to be done.
After a few days of lethargy and absorption of what I’d done and what had happened, my mind became restless and went back into “solve all the things!” mode. When something feels like too big of a task, I remember what my father has asked me since childhood: How do you eat an elephant? One bite at a time. Now, I am not advocating for the consumption of elephants, but you get the point. It was time to break the problem of “What do I do next?” into many smaller, solvable problems.
Disclaimer: I am not a healthcare nor life expert; this is a story.
When it comes to the unknown, humility is an asset, for you cannot possibly know everything about everything, so it’s okay to ask people for help. If you’re in this position, make sure you listen to your human resources contact at the company you’re leaving, specifically
Additionally, consult your bank/brokerage firm/family/life liason about what your options are and what they think you need to do to ensure you are handling these things correctly. This is what I have been doing, and it is such a relief when someone can help you down your path.
Naturally, since I have no income until I either sell a product or do freelance work, expenses have had to be slashed. Eating out, pubs, entertainment, etc., are not great expenses until I add up their cost over a year. Rent, healthcare and car insurance are the major killers. There’s no way I’d be able to even be thinking this way with a family to support, so I’ve got it relatively easy. Luckily, I saved up a bit of money before quitting, so I am cushioned for a little bit.
However, frugal is the word.
Once I got past the immediate financial and health issues I was able to start tackling the next question: what do I want and need?
I came up with potential paths to go down, sought advice from family and friends, and came up with the things I need in my life:
There were so many other contenders that, after examining what mattered to me and defining what was just outside my sphere of knowledge and influence, didn’t make sense for right now.
Currently, I am spending my days focused on learning and working. Here’s my routine:
This daily routine allows me to get things done that matter to me, cut out the things that don’t and make sure I stop and smell the flowers along the way.
Thank you Emily, my family, Jason Vanderslice, Marty Bauer, James Dabbs and the rest of you (you know who you are) for your guidance, encouragement and friendship.
It is never too late until it is. Right this very instant, no matter what your logical brain thinks, you can go home, bake a cake with grandma, ride bikes with your aunt, watch the sun set with a sibling and/or make dinner with old friends who are now married and settling down.
Make sure you ask yourself, if you are away, why you are doing what you are doing. If there is no good answer, act on it before the feeling passes. If there is a good answer, then stop what you’re doing and give someone you miss a call.
As soon as you leave your large organization, there is a very strange sensation that is exhilarating and terrifying all at once: I am now responsible for everything, and if I fail at any one thing in this chain, I fail. What is encompassed by “everything?” Let’s take a look at what we’ll talk about for the rest of the post: software freelancing.
Here’s some very basic logic based off of these responsibilities:
What’s more, how do you know if the client you are selling is a client you actually want to work with? How can both parties vet one another?
One way to get around many of the issues you run in to while freelancing that are related more to the business side than what you actually create is to foster relationships with businesses locally (or remotely, if you can manage that across the Internet). Attending meetups, lectures, happy hours, conferences, hackathons, open houses and the like are great ways to meet folks in your community and get to know them and their businesses. The more knowledgeable all parties are with one another, the more they can suss out whether or not they want to work with each other. Once someone has a good experience with you, it is more likely that they will come to you again to solve their problems.
If you can build enough relationships, you may not need to go outside of your local community for work. But what if your city doesn’t have enough work for you or you travel regularly? What if you want to expand your freelance reach to a national level?
Without an agency or consultancy brand name behind you, finding work from organizations outside of your network could be difficult. There are a number of services which help you, such as Elance and Gun.io, but the one that really caught my eye was toptal.
What really stuck out to me was the concept of their only accepting what they call the “top 3%” of people that apply. Their rigid screening process ensures that only quality developers are admitted to their community. To me, this guarantees prestige and sets the bar very high. This also means that, as a marketplace, toptal would need to ensure that clients are legit and responsible, as well. In sum, toptal offers all of this and much more.
I want to work with the best clients, and I want clients to receive the best work possible. I have just started the process for joining the toptal Web developers network to pick up additional work, look forward to the challenges of the screening process and hope to make it through!
If you enjoy working in your local area, then get to work building relationships and a pipeline of backed up work as far as the eye can see with people in your area! But if you plan on going national/international, consider giving toptal a shot and show them what you’re made of.
Update: I made a library, parse-md, out of some of this behavior in order to address the need of parsing metadata from markdown files.
My latest problem to solve was how, once I had a .md (Markdown) file’s contents, to go about parsing out the blog post’s metadata (see below: the key/value pairs between the two ---s).
---
title: This is a test
description: Once upon a time, there was a test...
---
# Title of my great post
Lorem ipsum dolor...
## Some heading
Bacon ipsum...
Once I split this file based on newlines, I needed a way of finding the indices of the metadata boundary, ---, so that I could splice the array into two pieces and be on my way. My first attempt at getting the indices looked like this:
function getMetadataIndices(lines) {
var arr = [];
lines.forEach((line, i) => {
if (/^---/.test(line)) {
arr.push(i);
}
});
return arr;
}
getMetadataIndices(lines); // [0, 3]
This is a simple solution that any junior dev can do, and it accomplishes the task… but it doesn’t feel right. I am iterating over each item, testing each line and mutating an array variable when a condition is true. While it doesn’t look like much, that is a good bit going on all at once. Instinct tells me that each action could be its own simple method. I also don’t want to use a temporary variable that I mutate. However, this removes forEach from our options, as forEach returns the original array. map() to the rescue! (or so we think).
function getMetadataIndices(lines) {
return lines.map(testForBoundary);
}
function testForBoundary(item, i) {
if (/^---/.test(item)) {
return i;
}
}
getMetadataIndices(lines); // [0, undefined, undefined, 3, undefined, undefined, undefined, undefined, undefined, undefined]
Crap. Because I only return when the test is true, map doesn’t know what to return, so it returns undefined and moves on. It would be nice if we could clean out these undefineds!
How can we achieve the following desired functionality?
function getMetadataIndices(lines) {
return lines.map(testForBoundary).clean(undefined);
}
getMetadataIndices(lines); // [0, 3]
Let’s make a function on the prototype of Array called clean:
Array.prototype.clean = function(trash) {
};
Here, we access Array’s prototype and add our own custom method, clean, and pass it one argument. Next, we need to filter out all of the undefineds in our array.
Array.prototype.clean = function(trash) {
return this.filter(item => item !== trash);
};
But what if we need to clean more than one value out? What if we need to clean null, "" and undefined?
In JavaScript, variadic behavior is a fancy term applied to functions that can accept and handle any number of arguments, and these are typically accessed within the function via the arguments object, which looks like an Array but is not. For example, this code will give you an error about indexOf not being defined on arguments.
Array.prototype.clean = function(trash) {
return this.filter(item => arguments.indexOf(item) === -1);
};
Drats! arguments is very similar to an array — how can we get this to work? slice to the rescue!
Array.prototype.clean = function() {
const args = [].slice.call(arguments);
return this.filter(item => args.indexOf(item) === -1);
};
Without any additional arguments, slice makes a copy of an array and allows us to provide a custom receiver of array-like functionality: arguments. What is returned from the second line above is an array-ized copy of arguments. Now that args is an array of all the arguments that are passed to clean, we can pass as many options as we would like to clean out our array!
Here is more example usage of such a method:
// Usage
const arr = ["", undefined, 3, "yes", undefined, undefined, ""];
arr.clean(undefined); // ["", 3, "yes", ""];
arr.clean(undefined, ""); // [3, "yes"];
In attempting to refactor some fairly simple, though multiple-responsibility code, we end up creating a few reusable functions that will benefit us in the future, and we make our code more maintainable, testable and readable in the end. Here it is once we have finished:
function getMetadataIndices(lines) {
return lines.map(testForBoundary).clean(undefined);
}
function testForBoundary(item, i) {
if (/^---/.test(item)) {
return i;
}
}
Array.prototype.clean = function() {
const args = [].slice.call(arguments);
return this.filter(item => args.indexOf(item) === -1);
};
But could this be done even simpler?
You may have been wondering why we didn’t use reduce like this from the start:
lines.reduce(function(mem, item, i) {
  if (/^---/.test(item)) {
    mem.push(i);
  }
  return mem;
}, []);
or, cleaned up a bit,
function getMetadataIndices(mem, item, i) {
if (/^---/.test(item)) {
mem.push(i);
}
return mem;
}
lines.reduce(getMetadataIndices, []);
Surprise! We totally could have, but since reduce was not our first thought when refactoring, we managed to solve our problem in another way. There are 1000 ways to solve problems, and sometimes you don’t think of the best one first, but you can still make the best with what you have at the time and refactor later.
The browser environment is one big JavaScript closure that will encapsulate in its scope all of the code that is to be run. Because of this, any functions or variables that are created in <script> tags or external .js files that are not defined within a function will end up as global variables! And we all know that global variables are bad. Let’s dig into this some more.
Every time you define a function and then define a variable with var inside of that function, that variable only exists inside of that function. For example, what is the value of result that is logged to the console?
// app.js
function kelvinToFahrenheit(kelvin) {
var result = Math.round(kelvin * (9/5) - 459.67);
return result;
}
kelvinToFahrenheit(274.3);
console.log(result);
The correct answer would be undefined (with a nice error), for result only exists within the scope of the kelvinToFahrenheit function. However, the function kelvinToFahrenheit now exists globally.
Why does this matter? Well, when you include a script onto a web page, its code now becomes part of this global closure. So if you define function kelvinToFahrenheit() without giving it a separate closure or namespace (more on namespaces in a second), then it is now a “global function,” meaning that it exists in the global namespace. If any other library you ever include uses a variable called router, your variable (or that library’s) is going to overwrite whichever came before it and cause massive issues. The same thing is true for variables:
// app.js
var currentTempInKelvin = 294.11;
So what are your options?
// app.js
;(function() {
// your code here
})();
The semi-colon here is a defensive technique used for when files are concatenated together–if somebody in one file forgets to close their file/library/definition out with a semi-colon, then your code is going to be an extension of theirs.
The () towards the end is nothing more than the invocation of the immediate function we’ve defined.
Thus, when you write
// app.js
;(function() {
var currentTempInKelvin = 294.11;
})();
and then you try to console.log(currentTempInKelvin); from the browser’s JavaScript console, you will get undefined, for currentTempInKelvin now only exists within that anonymous function’s scope. Hurray! No more globals.
But what if we want to access something in a global fashion? We know about the problems of name-clashing, so let’s also try to reduce that. Let’s combine what you did with the immediate function and do global variables in a less-bad way using namespacing.
Namespacing allows us to limit our use of global variables to one global by nesting all of our functionality within one global object that we’ll call WeatherApp.
// app.js
// No var declaration means global!
;(function() {
WeatherApp = {
kelvinToFahrenheit: {}
};
})();
or
// app.js
;(function() {
WeatherApp = {};
WeatherApp.kelvinToFahrenheit = {};
})();
or (better)
// app.js
;(function() {
window.WeatherApp = {};
window.WeatherApp.kelvinToFahrenheit = {};
})();
or (recommended)
// app.js
;(function(scope) {
scope.WeatherApp = {};
scope.WeatherApp.router = {};
})(this);
This last method allows you to use this code and pass in any contextual scope. Since this is equivalent to window at the global level, when you run this in the browser, this is window, so WeatherApp will be added to the window global.
When you leave out the var, you create a global variable, so be careful! I recommend being explicit about which object you are adding a namespace to. If you’re going the global variable route, then you should nest every single thing you’re doing inside of your WeatherApp namespace in order to avoid having more than 1 global variable.
This is a great pattern to utilize when you have relatively simple JavaScript you would like to add to a webpage and not have its contents clash with other libraries & code. If your code begins to get too complicated for this file, then we can start to look at the CommonJS module exporting & requiring pattern that is currently implemented by the wonderful Browserify library (aka, Node.js but in the browser). I may cover this in the future, but in the mean time, leverage the power of immediate functions for great good!
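For a taste of that module pattern, a CommonJS pair might look roughly like this (file names are hypothetical):
// weather-app.js: export only what we want to share
module.exports = {
  kelvinToFahrenheit: function(kelvin) {
    return Math.round(kelvin * (9/5) - 459.67);
  }
};
// app.js: require it; nothing leaks into the global scope
var WeatherApp = require('./weather-app');
WeatherApp.kelvinToFahrenheit(274.3); // => 34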
2023 update: This article, while still good, doesn’t take into account the defer attribute. Here is a great javascript.info article on async vs. defer.
Where do you place your <script> tags to load your JavaScript for your website? If you’re doing this within the <head> element, you might want to consider whether or not this is the best option for you.
So long as your website is accessed via HTTP/1.1 (which it will be for a long while yet), <script> tags will be used to fetch external JavaScript files whose contents will be included on the page. These typically look like <script src="app.js"></script>. <script> tags are by default “blocking,” meaning that the web page has to pause its download & render cycle, fetch and load the JavaScript, and then continue on. Here is what this looks like:
<html>
<head>
<script src="app.js"></script>
</head>
<body>
<div>My Website</div>
</body>
</html>
The worst thing you can do is load multiple scripts in this blocking fashion:
<html>
<head>
<script src="jquery.js"></script>
<script src="jquery.lightbox.js"></script>
<script src="some_file.js"></script>
<script src="app.js"></script>
</head>
<body>
<div>My Website</div>
</body>
</html>
so make sure you combine (concatenate) all your JavaScript files into one file. But this is still not ideal, for you have a blocking script that will have to download before anything else happens.
When we throw <script> tags at the end of the <body>, we allow the page to paint and then go and fetch the JS synchronously (this lets the user see and utilize the page, but the scripts still haven’t finished loading).
<html>
<head></head>
<body>
<div>My Website</div>
<script src="app.js"></script>
</body>
</html>
However, since the page is still loading, search engines might punish you for a long(er) loading time. What we might need, instead, is the ability to asynchronously fetch the JavaScript after the page is finished loading.
There are two popular methods for fetching JavaScript in an asynchronous manner. The first is to simply include the HTML5 async attribute:
<script src="app.js" async></script>
Or, if you need to support older browsers, add an event listener for the window’s load event to dynamically build a script tag and append it to the page (note how I do not use window.onload =):
<script>
window.addEventListener('load', buildScriptTag);
function buildScriptTag() {
var script = document.createElement('script');
script.src = 'app.js';
document.body.appendChild(script); // append it wherever you want
}
</script>
Why didn’t I use window.onload = here? When you assign a value to a browser callback like window.onload, it can only ever hold one value! When you add an event listener, you allow the window’s load event to gain more handlers in the future.
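A quick sketch of the difference (log messages hypothetical):
// Assigning to window.onload replaces whatever handler was there before:
window.onload = function() { console.log('analytics ready'); };
window.onload = function() { console.log('app ready'); };
// Only "app ready" logs; the first handler is silently discarded.

// Event listeners stack instead of replacing one another:
window.addEventListener('load', function() { console.log('analytics ready'); });
window.addEventListener('load', function() { console.log('app ready'); });
// Both messages log.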
If your app/website is architected to rely on JavaScript before it renders anything, then you can utilize this asynchronous technique with a “Loading…” graphic that is removed when the JavaScript loads. Ultimately, you want to decrease the amount of time it takes for a web page to perform an initial load so that the user can get started using your project as quickly as possible. With asynchronous loading of JavaScript, you enable your users to get going ASAP and then allow them to have a fancier experience once things load in the background.
In a long-distance relationship? I currently am, and it sucks, but that will change someday. My SO (significant other), who is an ocean away and five hours ahead, and I were recently celebrating the passing of time and wanted to share a movie night. How were we to do this?
There are many options out there that we explored, such as Gaze, which has a great design (though a questionably shaky platform), but only supports .mp4, .ogg and .webm files (even if you alter the webpage to accept all file types). We had our movie in .avi and .mov, so this wasn’t going to work.
We also tried a few other services but to no avail, so we put our thinking caps on and solved the problem with these simple steps:
This was surprisingly easy, and Skype lets you keep a small version of the person you’re video calling on top of all other screens, so I put her on the top left of my screen and presto! Long-distance movie night.
But then Margaret comes back to you and says that Jane isn’t sure why you said she could solve this problem. Jane is confused, you are confused, and Margaret is left floating in ambiguity without any direction or answer and must go ask someone else and begin this cycle again. If this conversation happens via email, expect it to take multiple days and span multiple threads.
Why did this happen?
When someone reaches out to you and asks a question, and you believe someone else has the answer, then you are now effectively a hinge between these 2 people.
You are what connects that person’s problem with his/her solution. When you do not act as a hinge does and connect the two different parties, then both remain separate (read: you will now spend significantly more time than you would have originally spent figuring out a solution).
Delegating responsibility and empowering others are pivotal skills for leadership of any sort. How does it feel when you ask someone for their assistance, and they are incredibly helpful in getting you where you need to go? It feels pretty darn good. Why? Because you have a guide that either knows the way or knows someone who does. You know that you are in safe hands and that your problem will be solved. You have a clear path to a solution and are thus empowered.
If someone asks you a question or comes to you with a problem, it is because they believe you possess the solution. Instead of dumping them onto someone else, it might be in everyone’s best interest to set up an introductory email/Google Hangout/Skype call to act as the hinge you are and hand one person off to another for safe-keeping. This way, you leave nothing to question, and if there are any issues, your group will immediately be on the same page and be able to find a solution much more quickly.
I have recently switched from one dream job to another one. For the past 1.5 years, Articulate has allowed me to roam from Washington, DC to Charlotte to Charleston to Atlanta to Greenville to Miami to Utah to London to Edinburgh to Berlin to northwest Spain to Denver and beyond. Without such an opportunity, I would never have met many of the people nor had the experiences that mean so much to me and helped me grow as a human being. Hopefully I helped a few folks along the way, as well.
Now, however, I am shifting gears. The Iron Yard will be my new home and might well take me to whole new levels of understanding, inter-personal relationships and travel. I will be teaching software in an accelerated course to highly motivated people who, of their own volition, are willing to spend >40 hours per week in my presence (the horror!). The first course will be in Charleston, SC, and other courses will be taught in an undisclosed location overseas (I don’t think it’s been officially announced).
"But Robert! How does this help you travel?"
I’m glad you asked! Each “cohort” (class) is 12 weeks, and apart from the travel associated with being in different teaching locations, the work schedule goes like this: 3 months on, 1 month off.
I know—badass; time for travel, personal development and freelancing.
But what allowed me to get to this point, apart from dumb luck? Self-discipline. If you have the self-discipline to find your way through the boredom of working your back-side off day and night to not only learn new things but explore, meet new people and foster relationships with human beings of every race, gender and creed, you can do anything.
The Iron Yard - Charleston is still taking applications for the class starting June 15, 2015. Apply here and learn you some code.
“Silicon Valley,” the name that was copied and applied to Charleston as “Silicon Harbor,” got its nickname from the concentration of companies specializing in making semiconductors and other computer-related products in the southern part of the San Francisco Bay Area. Silicon, a metalloid that is the most common element in the Earth’s crust besides oxygen, is not unique to the Bay Area. It is in that location that an entire economy and culture grew around the semiconductor and then microprocessor and then software industries. Charleston, however, has no such story to support calling itself “Silicon X.” Here is a list of locations that have also called themselves “Silicon X” in the hopes of getting the scraps from Silicon Valley:
…and many more from the List of Places with “Silicon” Names.
While this might all sound quite negative, it is necessary in order to break folks out of their “Silicon Harbor” daydreaming and recognize that riding the coattails of successful communities is not always the path to creating one. A community should be able to stand on its own merits, be unique in the world and at least strive to be its own unicorn-like entity.
Charleston has an impressive number of technology and creative companies, as BoomTown’s useful map compilation shows, given the size of the city. Ask most anyone in the developed world what San Francisco means to them and the answer will include “Silicon Valley,” “technology companies” and “innovation.” Ask the same about Charleston and the answer will include “beautiful architecture,” “great food,” “Southern Hospitality” and maybe “Boeing.” Nowhere outside of South Carolina will you hear people speak of Charleston’s technology scene.
Why is this? Is it a lack of employment opportunities offered by companies in the area? Somewhat. Is it a lack of marketing? Partially. Is it a lack of novelty and significance in global consumers’ lives? Definitely.
If Charleston is not going to be cranking out Apple/Google/???-ambitious tech, then it needs to learn to compete on other fronts. Human resource software, real estate software, healthcare software and assistive financial software are all important industries and employ a great number of people in the Charleston area. They also benefit their community and give back in innumerable ways. However, there have yet to be any ground-breaking technologies to come out of Charleston since Automated Trading Desk, formerly led by Steve Swanson, burst onto the scene in the late 80s and ran through the late 2000s with its utilization of high-frequency trading, a technology that revolutionized the stock trading industry. If Charleston dares to be great, then it should lead by example.
Personally, I am not as successful as the current folks Mayor Joe Riley courts and supports as drivers of community growth. I have nothing to show. But what I do have is an outside view of the community while still maintaining my membership of it and participation in it. What I see is a city that has many of the traits of a progressive city on the rise but which constantly compares itself to others instead of charging forward as a leader.
It is in this light that I suggest Charleston abandon the overused and cliché “Silicon X” tag and adopt a new nickname (sorry to all you organizations who have named yourself “Silicon Harbor X”). Nobody uses it outside of the Charleston tech community, and nobody will (just as you don’t call Baltimore, MD the “Digital Harbor”). It’s time to move on.
This new nickname should represent the vast presence of technology companies as well as evoke all the right emotions for emphasizing that Charleston is a fantastic place to work, live and raise a family. But it should, above all else, represent the location and culture itself, without having to rely on an overused, 60-year-old term from the other side of the country.
With the evolution of digital, mobile devices came a million new ways to distract ourselves. Every new Twitter mention, Facebook tag, Instagram #hashtag, email, text and push notification is a jarring intrusion into whatever we are doing that yearns for our attention and ultimate dismissal. Ignoring the vanity associated with thinking that we are important enough that our responses cannot wait (I’m not talking about you, doctors; you have a free pass with your 13 pagers), there is another reason so many of us are tied to our phones: we want to clear messages and push notifications out of our lives as quickly as possible in order to avoid the stress of having so many of them build up. Everyone loathes the feeling of a mountain of emails crushing down on them. Everyone feels guilty about not returning that email their boss sent them at 11pm about XYZ big important project of the quarter. But something is lost—ignored, even: the consideration of those around us. Instead of focusing on the anxiety these acts alone cause us, I would like you to consider the effects your actions have on those around you.
I want to tell you about a man who changed my life by setting an example. Paul Singh, former partner at 500 Startups and founder of Disruption Corporation, is a very well-known investor in the tech world and single-handedly transforms cities economically. You can safely bet that his phone is constantly buzzing with the latest killer app ideas, city construction issues and other problems that require his attention.
I was fortunate enough to get to spend a day with Paul, Marty Bauer and heaps of other great folks at The Iron Yard in Greenville, SC, last year, where Paul came to speak about what he was doing in Crystal City, Arlington, VA. We picked him up at the GSP airport and, after a night’s rest, three of us met Paul at Coffee Underground for breakfast and coffee. While we sat there discussing life, the universe and everything, I was surprised how attentive and conversational Paul was. He listened to every word someone had to say, waited for them to finish, thought for a moment and then provided a relevant response and/or follow-up question.
So what’s the big deal? Most people are taught from childhood how to have a conversation. Let’s hear more.
During his presentation, an attendee was most disrespectful, hijacking the presentation to seemingly blame Paul for the Industrial Revolution and subsequent child labor issues. As one would imagine, this had nothing to do with his presentation, but instead of putting the attendee down, Paul listened and responded to the attendee with respect while he received none. When it was clear there was to be no resolution to the original question, he closed the subject elegantly and continued his talk without skipping a beat.
What do these two personal anecdotes have to do with mobile device etiquette?
Everything.
You see, Paul Singh, whether at the coffee shop or giving a talk or speaking with attendees afterwards, was present. He was there with each person in each moment. When I confronted him about his persona and charisma, he mentioned a book, The Charisma Myth, which I of course purchased soon thereafter. This book explained, broke down and reinforced principles of interactions with others that had been taught to me in my youth, but behind which I had never fully understood the “why”. While the book is a gold mine of information, there is a very large emphasis, with regard to exuding charisma, on presence. In short, if you are distracted and not paying attention to someone you are with, this person will consciously or unconsciously believe that they are not important enough for your attention and will thus likely stop seeking to be in your presence simply because of the way you make them feel.
Think about that.
How many times have you been distracted by something, not just devices, when you have been with someone? How many times have you not answered their questions because something your mind deemed more important grabbed your attention? You may not be able to recall these instances, for they mean nothing to you. But now that you have read this far, you will undoubtedly begin to notice others acting this way towards you. Don’t worry, for seeing negative habits in others is the first step to changing your own.
Most folks these days know not what they do. The younger generation doesn’t know a life without phonepads, and the older generation didn’t know to ubiquitously enforce an etiquette around them. I believe there will eventually be a pendulum swing away from the current state of device affairs where mobile device etiquette becomes a standard, and considerations for this will continue to show up in new device features.
In the meantime, an interesting “game” has popped up among 20-somethings where everyone at dinner places their phones in the middle of the table, one on top of another. If anyone retrieves their phone during dinner, that person must then pick up the tab. Gamification… who would have thought? This is a great start.
If you have to ask yourself,
"Is it appropriate to take out my phone?"
then the answer is probably no.
When your friends are all tweeting about how much fun they’re having during your birthday party at the pub, be careful with bringing attention to the subject. Not only will they be ashamed when they realize you are right, but they will also resent you for calling them out. However, being passive aggressive and hinting at their poor manners isn’t the answer, either.
It is said that when you point at someone, you’re pointing three fingers back at yourself. Instead, focus on making yourself better rather than being quick to judge others.
When you go to hang out with your little cousins or when you fly to give a talk to a group of strangers, turn your phone on silent (or Airplane Mode) and either keep it in your pocket at all times or even leave it in the car. If you have a necessary business meeting, plan it accordingly and, if you must, take the call outside. Remember that every single person you come into contact with is affected by the actions you take and the way you make them feel while they interact with you. Set the example and pass on good habits.
The following statement may cause me a bit of flak in the South, but
"Denver has the most welcoming population I’ve ever come across in my travels."
Never in my life have I met such a warm, trusting, "do good" people who have a blind affinity for strangers of all shapes, colors and creeds. Since my first day here, I have questioned whether I simply haven’t met enough people or experienced enough of the city to see its faults. But my initial premise is supported day after day and experience after experience.
One can tell much about the culture of a city by a few things:
Drivers consistently stop for pedestrians to cross the street. Who would have thought this a novel concept? Additionally, if a driver pulls their car into a cross-walk, 9 times out of 10 that person will wave, apologize and attempt to back their car up so folks on foot have space to walk. This alone speaks volumes about a community.
I am a harsh judge of patrons’ character when it comes to their interactions with service industry workers. Is it right to judge people? No. Does doing so help me gauge not only the person I’m speaking with, but a community as a whole? Yes, for watching an unnecessarily rude man be put in his place by other restaurant-goers is something special.
On the other side of the coin, I have yet to meet a rude person providing me food & bev service. A great example is at Renegade Brewing. The bartenders here struck up a conversation while I was having a beer and burger, alone, on Valentine’s Day. After a few minutes of conversation, I was introduced to some of their friends across the bar, and we had great conversation and beer before parting ways.
There are >200 breweries in Denver. ’Nuff said. These people like to have a good time.
My employer, Articulate, has paid for me to work three days per week out of the Density CoWorking spot. Density is located in a fun neighborhood about one block from the Marczyk Food Market. You know a place is cool when they have wall-mounted unicorn decor:
There are heaps of unique and delicious coffee shops to work from, as well. My favorite, thus far, is the Denver Bike Cafe on 17th Street.
While skiing in Denver could be fun after a heavy snow, the mountains are nearby and ripe for adventure. I’ll let the pictures speak for themselves.
I have only begun to scratch the surface with all that Denver has to offer and look forward to another month here.
I am a fan of Facebook’s ReactJS library because of its DOM diffing (via the “virtual DOM”) and one-way data binding. React is a tool I use every day and have come to enjoy (sans-JSX), but I am always on the lookout for ways to do things more simply.
A colleague of mine recently shared the second iteration of RiotJS with me. Of course, I was sucked in because it compared itself with React (a bold statement). You can view the comparisons between Riot and React for yourself.
One unfortunate fact about fledgling JS libraries is that they lack examples of how to accomplish common goals for the web. This is an attempt to help out with that.
Given you have a package.json set up, you can easily install the Riot compiler as a development dependency:
npm install riot --save-dev
and then set up Riot to run as an NPM script and watch for any changes:
"scripts": {
"watch:riot": "riot -w src/ build/"
}
which is then run by a simple
npm run watch:riot
Alternatively, you can download the riot.js library via any of their recommended methods.
If you don’t have a package.json and want to install Riot globally:
npm install riot -g
riot -w src/ build/
I decided to start small and make a tabbing example where clicking a tab shows content related to it underneath. Here is the final product:
Starting with a blank HTML document, add the <riot-tabs></riot-tabs> tag to your document:
<!DOCTYPE html>
<html>
<head></head>
<body>
<riot-tabs></riot-tabs>
</body>
</html>
As mentioned previously, we know we need the (very tiny) RiotJS library, so don’t forget to include it:
<body>
<riot-tabs></riot-tabs>
<script src="path/to/riot-2.0.1.js"></script>
</body>
Easy enough, right? Given Riot doesn’t write our applications for us, we will need to tell Riot to mount some component, which in this case is “tabs.”
<body>
<riot-tabs></riot-tabs>
<script src="path/to/riot-2.0.1.js"></script>
<script>riot.mount('riot-tabs')</script>
</body>
When we run this code through the browser, we’re going to receive an error telling us that 'tabs' is not a thing. Congrats! Time for Step 2.
Riot’s NPM package, as mentioned earlier, allows us to write and compile pseudo-markup mixed with a little JS. To get started, create a src folder and add a tabs.tag file to it, then run
npm run watch:riot
if you have an NPM script set up, or
riot -w src/ build/
to compile and watch for further changes to the file/folder.
Back in the tabs.tag file, add this:
<riot-tabs>
<h2>Tabs</h2>
<ul>
<li class={ tabItem: true }>Tab 1</li>
<li class={ tabItem: true }>Tab 2</li>
<li class={ tabItem: true }>Tab 3</li>
</ul>
</riot-tabs>
That looks almost exactly like vanilla HTML, save for the conditional class(es), which we will put to use later with is-active classes. They are also way better than concatenating className strings yourself.
Refreshing your browser will show you that you now have content nested within a <riot-tabs></riot-tabs> tag.
Next up, we can add in the different tabs’ contents:
<riot-tabs>
<h2>Tabs</h2>
<ul>
<li class={ tabItem: true }>Tab 1</li>
<li class={ tabItem: true }>Tab 2</li>
<li class={ tabItem: true }>Tab 3</li>
</ul>
<div class="tabContent">
<div class={ tabContent__item: true }>(1) Lorem ipsum dolor...</div>
<div class={ tabContent__item: true }>(2) Lorem ipsum dolor...</div>
<div class={ tabContent__item: true }>(3) Lorem ipsum dolor...</div>
</div>
</riot-tabs>
Okay, this is no big deal, so far.
Being software developers, we hate writing things over and over, so let’s start with the tabs.
<riot-tabs>
<h2>Tabs</h2>
<ul>
<li each={ tab, i in tabs } class={ tabItem: true }>{tab.title}</li>
</ul>
<div class="tabContent">
<div class={ tabContent__item: true }>(1) Lorem ipsum dolor...</div>
<div class={ tabContent__item: true }>(2) Lorem ipsum dolor...</div>
<div class={ tabContent__item: true }>(3) Lorem ipsum dolor...</div>
</div>
this.tabs = [
{ title: 'Tab 1' },
{ title: 'Tab 2' },
{ title: 'Tab 3' }
]
</riot-tabs>
Riot has a nice each={ item, i in array } attribute, similar to JavaScript’s for … in …
While we’re at it, why not iterate over the content items, as well?
<riot-tabs>
<h2>Tabs</h2>
<ul>
<li each={ tab, i in tabs } class="tabItem">{tab.title}</li>
</ul>
<div class="tabContent">
<div each={ tab, i in tabs } class="tabContent__item">{tab.content}</div>
</div>
this.tabs = [
{ title: 'Tab 1', content: "(1) Lorem ipsum dolor" },
{ title: 'Tab 2', content: "(2) Lorem ipsum dolor" },
{ title: 'Tab 3', content: "(3) Lorem ipsum dolor" }
]
</riot-tabs>
Next, we need to set a default “active tab” and “active content.”
We want to be able to specify a default tab and tab content. This is accomplished via a conditional is-active class on both the .tabItem as well as the corresponding .tabContent__item. To keep track of what tab/content is active, we can add a ref to each of the this.tabs array objects, create an activeTab property, and compare each tab’s ref against activeTab:
<riot-tabs>
<h2>Tabs</h2>
<ul>
<li each={ tab, i in tabs } class="tabItem { is-active: parent.isActiveTab(tab.ref) }">{tab.title}</li>
</ul>
<div class="tabContent">
<div each={ tab, i in tabs } class="tabContent__item { is-active: parent.isActiveTab(tab.ref) }">{tab.content}</div>
</div>
this.tabs = [
{ title: 'Tab 1', ref: 'tab1', content: "(1) Lorem ipsum dolor" },
{ title: 'Tab 2', ref: 'tab2', content: "(2) Lorem ipsum dolor" },
{ title: 'Tab 3', ref: 'tab3', content: "(3) Lorem ipsum dolor" }
]
this.activeTab = 'tab1'
isActiveTab(tab) {
return this.activeTab === tab
}
</riot-tabs>
Since these are conditional classes, they will be evaluated as either true or false (I believe anything that is not falsy is considered true; for example, new Date() is considered true). Here, we create a function called isActiveTab and call it from the item itself, but because the function is not scoped to the item, we need to access the parent scope and call the function on that.
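Note that is-active does nothing visually until you style it. A minimal sketch of the CSS (the class names come from the markup above; the styling choices are my own):
<style>
  /* Hide every content pane by default... */
  .tabContent__item { display: none; }
  /* ...and show only the active one. */
  .tabContent__item.is-active { display: block; }
  /* Highlight the active tab itself. */
  .tabItem.is-active { font-weight: bold; }
</style>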
Finally, we need a way to react to events. When we click on a tab, we want that tab to become active, and we want the corresponding tab content to be displayed. This can be done via an onclick handler that calls a function on the parent called toggleTab:
<riot-tabs>
<h2>Tabs</h2>
<ul>
<li each={ tab, i in tabs } class="tabItem { is-active: parent.isActiveTab(tab.ref) }" onclick={ parent.toggleTab }>{tab.title}</li>
</ul>
<div class="tabContent">
<div each={ tab, i in tabs } class="tabContent__item { is-active: parent.isActiveTab(tab.ref) }">{tab.content}</div>
</div>
this.tabs = [
{ title: 'Tab 1', ref: 'tab1', content: "(1) Lorem ipsum dolor" },
{ title: 'Tab 2', ref: 'tab2', content: "(2) Lorem ipsum dolor" },
{ title: 'Tab 3', ref: 'tab3', content: "(3) Lorem ipsum dolor" }
]
this.activeTab = 'tab1'
isActiveTab(tab) {
return this.activeTab === tab
}
toggleTab(e) {
this.activeTab = e.item.tab.ref
return true
}
</riot-tabs>
The onclick event handler receives an event object that is packed with information. What we want is the tab we are clicking on, and this is accessed through e.item.tab.ref, which is just the ref property on the tab object of the currently iterated item.
According to the Riot docs, when an event handler is called, Riot will automatically call this.update() and re-render the component. However, I found that after I altered my data, I had to return true.
Once this event handler is completed and the component is re-rendered, the correct tab and content will be displayed, and you will be happy.
In sum, playing with Riot was a mostly enjoyable experience, and I am thankful to the Muut folks for releasing it.
While there are quirks (single quote vs. double quote issues, among others) and opinions (neglecting the use of semi-colons, as well as returns), this is a promising UI library that I am definitely going to consider vs. React in my future projects.
Let us begin with this sage tweet from Marty Bauer:
Sending a mass email and not bcc-ing everyone is like a kicker missing an extra point. You had one job.
For those of you who’ve never thought twice about this email field, it stands for blind carbon copy. The addresses in this field will not be shared with the other recipients of the email.
Nobody wants to read the 90 responses and side conversations from the folks that love to always “Reply All.”
Dear Startup Founder/Venture Fund Manager/Chamber of Commerce affiliate,
When you send out a mass email to a group of investors, you are forfeiting each one of these people’s emails (and a bit of their privacy) to the other recipients, as well as anyone to whom this email is forwarded. Don’t do this.
Love,
Responsible Users of the Internet
We’ve all made this mistake and (hopefully) learned from it. Don’t fret your hideous tendencies of emails past. Go forth from this day and send your 50-person email—using Bcc—with the confidence that you are not being a jerk.
Some time ago, I began removing things from my life that did not help me on the path to being the person I desired to be. This is a very painful yet effective attempt at a remedy. Unfortunately, it is only a piece of the puzzle. While external factors influence our habits and tendencies a great deal, all decisions have an ultimate decision-maker: you.
I could go on with this post for days. Instead, I’ll make it simple:
Be kinder to the cashier at the grocery store.
Be someone whose positive attitude inspires others.
Be confident in accomplishing your goals or lifestyle changes.
Be better than you were yesterday.
A vast majority of the people in the States who inquire as to the nature of my job share with me that they feel working remotely would make them feel isolated; that they would lose their sense of team; that they would get bored.
Someone once said to me,
"Only boring people get bored."
Let that sink in for a second. Let yourself recognize that you might just be boring. While you think about it, read this introduction on boredom from Wikipedia:
"Boredom is an emotional state experienced when an individual is left without anything in particular to do, and not interested in their surroundings."
Comma usage aside, this says a lot about people who get bored. If you have nothing in particular to do, well, that’s great, but odds are that you do not have a family or are not contributing to humanity (there’s much to do here). Your choice. I would like to focus on the second point: "and not interested in their surroundings."
Working remotely allows you to work from wherever you want, given you can accomplish your work goals. It doesn’t matter if you are an accountant, human resource associate, technology professional or doctor. Thanks to the present-day methods of communication, knowledge and data can be transferred in any number of ways. Many remote workers choose to do their work from the comfort of their homes where they can be nearer to family and friends and not waste their lives away in commuter traffic. Others move to new locations as they please. Regardless of what these people choose, one fact remains: they can go wherever they want, or in another light, have an infinite opportunity for new, interesting surroundings. If they were boring before, they now have the freedom to never be bored again.
Now that we’ve addressed the boredom fear, let’s focus on the isolation and loss-of-team mentality.
I write software. You might associate this with sitting in a dark room, chugging Coca-Cola, solving complex problems with microwavable fish sticks and living in solitude. You wouldn’t be wrong about the fish sticks. But solitude? Can you say that you spend six hours at work speaking with anywhere from 1 to 14 people at a time? Physically, I might not be in the same room as someone, but by the end of the day I am exhausted from the sheer amount of conversation and problem solving I have with my colleagues. I just don’t have to smell them.
One of the attractions to the job I’m working now is that the company, Articulate, has no office and is completely virtual. I recently learned that it is the 2nd largest virtual company in the world. This means that I have the guidance of an entire company of remote workers. What sort of guidance do they offer?
"Live your life, and do not waste the opportunities and time you have."
So I packed a bag and moved to London. I travel to different areas of town ~4-5 days a week, find wifi, do my work, explore, meet people, go find another place to work and then go exercise at an outdoor gym. Oh, and I’ve been to Berlin and have Spain, Scotland and Ireland all coming up in the next 1.5 months. And I just got back from five days in Miami.
I am of the (generally agreed upon scientific) opinion that I shall not live forever, so I’ll be damned if I regret one day.
There is a program starting next year, Remote Year, that involves ~100 remote workers traveling to cities around the world for 2-5 weeks at a time. In each city they will be working, exploring, absorbing the culture and then moving on to the next city. I will be applying, and if you manage to find/have a remote job, so should you.
Trust me; you won’t get bored.
Since 2007, The Great Outdoor Gym Company, at the behest of local councils, has installed hundreds of outdoor gyms & obstacles in parks all throughout the UK (notably London). There are surely other vendors, but this is the only one of which I know.
The gyms sport everything from pullup & parallel bars to vertical presses to elliptical machines. Clapham Common, where I exercise, has at least three outdoor gyms that are heavily used every day, require surprisingly little maintenance, and stand up well to the elements.
"Great; loads of gyms! Now what?"
The phrase weary traveler is fairly descriptive of the effects of travel on people. Whether you are walking, sitting or sleeping for a long time, the effects are similar: you are tired! And the last thing you want to do when you’re tired is exercise. This needs to change.
Do you have a good work ethic? Are you rarely late to work or meetings? What is it about these that make you attend and do so in a timely fashion? You probably say, "This is something I have to do, so I am going to power through and do it, even if I don’t want to." So why not treat exercise in this manner?
In short, you need to learn to treat exercise like an unavoidable daily event; like a meeting. If you can do this, then all procrastination and "I’ll do it when I feel like it" excuses go right out the door.
If you are worried about not having a place to work out in the area you are traveling to, do a little research beforehand.
Ultimately, all of these decisions come down to you. Whether you are on a short trip or are doing extensive traveling, if you take it upon yourself to strive for a sound mind and body, you will find a way to better yourself while on the road.
Tonight, I lost a dear friend.
On this day, 23 September 2014, my family lost a constant. Chief, a member of our family, passed — by our hand — on account of aggressive cancer.
Chief made me better. Or, Chief made me make myself better. I will never forget one moment, one line of his life, I swear. He will always be my brother, my son, my guide.
I love you, man.
Three weeks ago, a business partner and I received an email invitation from an event company, hy!, asking if we would like to attend and participate in a mobility conference in Berlin. Naturally, we said “yes!” without hesitation. After forgetting my passport, missing my first flight and running into Tyrion Lannister (Peter Dinklage) in London’s Gatwick airport, I finally made my way to Berlin.
The AirBNB accommodations were thankfully booked by my business partner, Marty, who joined me on this trip. We stayed in a clean studio apartment with a balcony near the Uhlandstraße train station.
Given Marty was no stranger to the area, my first night consisted of a walking tour through the streets around our AirBNB with 0,5L-sized beers in hand (this is actually legal).
The following day consisted of attending the hy! mobility conference. In attendance were representatives from very impressive companies, including
and various other noteworthies. There were three group workshop sessions that debated various trends, upcoming technologies, issues and recent regulatory hurdles (sorry, Uber) many of the companies face. I felt incredibly welcome during the conference and was happy to be surrounded by so many passionate players in the mobility game.
After the conference, the attendees of the conference were allowed to drive brand new Audi A4s around Berlin to the location of our dinner, Tiergarten. Sadly, neither I nor Marty possessed German drivers’ licenses, so we had to ride in the back. Our "driver," John, and our Audi representative, Kathleen, decided we would not be going straight to Tiergarten and would be instead taking a slight detour to test the car’s capabilities along a more scenic route. Kathleen was hilarious (a great saleswoman!), and John drove like a pro.
The evening consisted of many half-liter beers, tons of food and fantastic company at the restaurant in the heart of Tiergarten.
German beer plus a late night equals a very late start the following morning! Once revived, Marty and I made our way to the headquarters of Berlin’s Startup Bootcamp for Marty’s meeting with the heads of a fellow GAN (Global Accelerator Network) accelerator (backstory: Marty is the Managing Director of The Iron Yard accelerator, also part of the GAN family, that is based in Greenville, SC). We arrived amidst a throng of camera crews interviewing startups who were prepping for in-depth sessions with mentors. Tanja (the co-MD) and Louise received us. While we were treated to free cappuccino and lunch and had a nice time speaking with Louise and a few startups, one thing I know from living in the South is that if you have guests in your house, you should never leave them to wander and, instead, should keep someone with them at all times. Meandering about someone’s home/open office space can be an awkward experience. Alas, this was not the case with us while we were guests, but I suppose they were simply too busy.
The remainder of our day carried us all over Berlin. We consulted Google Maps for our dinner and found a fantastic place, Dicke Wirtin, near our AirBNB around the corner from Uhlandstraße. We sat at a table with two middle-aged German men who were as nice as they could be, ate schnitzel, and drank. I absolutely recommend this place if you visit the area!
Below are a few images from the day & night.
My time in Berlin was an odd mix of the new with the antiquated; the shiny with the faded and dull; excitement for the future existing alongside the pain of the past. I thoroughly enjoyed every moment of my time here. Now, having embraced the friendliness of the people, tasted the delicious food and viewed marvelous structures, I must visit Berlin again!
]]>"Work when you want, where you want; just don’t be a jerk."
Sound too good to be true? It is true.
Articulate is a “remote” or “virtual” company, meaning it has no physical offices anywhere in the world, and everyone works wherever they like. Many of my colleagues work from home, whereas others tend to shift around a bit. It is this freedom that was (and is) a primary driving force for me wanting to work with this company.
London is an expensive town, especially when you’re exchanging American Dollars for British Pounds Sterling. NomadList.IO, a site that ranks cities to work remotely from, does not treat London kindly. At the time of writing, London has a -89 reputation… on a scale where positive is a good thing. While it is an expensive town, it is also incredibly friendly, and there is no shortage of WiFi.
So, where have I worked over the past two days? For starters, I had a small taste of South Kensington. I worked in a small coffee shop 2 blocks from the Victoria and Albert Museum, and around lunch I took a stroll through a few of the free galleries before receiving word that my roommate’s girlfriend was locked in the apartment and could not get out. She was freed soon thereafter when I returned home and unlocked the door.
Today was spent mostly near Clapham Junction (a 5-10 minute walk from our house) in, first, a Starbucks and then in an Italian cafe up the hill from our house (a 1-2 minute walk). Starbucks is fantastic for attempting to offer free WiFi to their customers. I’m not sure who started that trend, but it is great. However, in any big city I go to, Starbucks’ WiFi never actually works. It reminds me of the $1/ticket Megabus WiFi in the States (in short, it doesn’t work). Thankful as I am for even having access to the Internet, whereas so many people are without, I require it for my job. And I need it to be fast. Luckily, the little Italian cafe (run by Francesca and, I believe, her son) has blazing-fast internet.
I love not working in an office. I love the freedom. I love being able to walk and ride around and do what I want (or need) to do while still being a contributing member of my team.
Many people I know ask me,
"When are you going to get a real job?"
I always respond with, “Oh, I don’t know…” But what I want to say is,
"I have a real job. I have a salary, I have benefits and I have responsibility. Just because your definition of a real job involves long hours, living for weekends and generally hating life does not mean that it is a universal truth by which all must abide."
Instead of spending loads of time during the workday perusing CNN or Reddit or Hacker News, I am out exploring, meeting new people and taking in as much as my limited time here will allow me.
London, so far, has been amazing and has allowed me to leave my comfort zone and discover people, places and experiences that I would never have had if I would have just stayed home.
There were no external driving factors behind this decision; no significant other, no new job, no criminal charges, no desire to leave my home in Charleston, SC.
I love my family, friends, beaches, the spartina grass (marshes), the plough mud, and the odd—yet wonderful—assortment of personalities that exist in Charleston.
"If everything is so wonderful, why leave?"
Because I can. Because of this list of things I don’t have:
Here is another way to view this list:
I understand that ¾ of the items on that list are desired by many folks, and that what I have just described could also be classified as "responsibility-free." Or, as I have understood before, that the life of the traveler is rootless and thereby does not bear the burden of creating something lasting. I find this statement accurate; however, I also find that safety makes me complacent and breeds stagnation and settling: the opposite of my battle and striving for excellence, or the Greek areté. Thus, in order to better myself, I decided to shake things up a bit!
Additionally, and worthy of note, every single person I spoke with about traveling this way said this:
"If I were you, I’d be gone in a second."
I will miss my grandparents, parents, siblings, my dogs, aunts, uncles, cousins, surfing, and everything that makes Charleston great.
But I shall return!
After serenading the Delta counter attendants in the Charleston airport with my guitar and song, I had a sleepless and safe flight to London!
When I finished riding a number of trains and other ground transportation, I arrived in Clapham at Sam’s house, where I was greeted with smiles, cheers, and Heineken.
Soon thereafter, I decided to get to know my new area better. My solution? Go on a run and get lost. Here is my Fitbit from that day:
You could say I got lost. I like to think I was "exploring." Nevertheless, I had a fantastic time and look forward to many more.
What we agreed on was that being out in the wild, pounding pavement, selling your product is exactly what every single startup founder and/or entrepreneur should be doing. Doug told me that he likes my company, RidePost, because we are not hiding in our code, adding this and that feature, saying, "If we just add/change/remove this one feature, people will start to buy our product!"
I know this has been written about 1 x 10^255 times, but he said something in this conversation which really hit home:
"Your product doesn’t judge you; people do."
Fact: It is easier to hide in your product than take the beating that is the outside world.
Selling sucks. You know that terrible feeling in middle school where you ask the girl of your dreams if you can walk her home and she says no? Yeah, that. Every day. Forever. Most developers would rather deal with failing unit tests than deal with the anxiety and inevitable depression that is sales. However, when you don’t have your very own sales team, you have got to learn to take this. If you do not, then you should stop what you’re working on, return your friends/family funding and go home.
“But how will sales build the product I’m selling if I’m not focusing on product?” First of all, sales will not only allow you to continue building your product, but it will also drive what your product becomes. It is pointless to build a product that people do not need and are not willing to buy. For example: I can build a 100% gas-free, solar-powered lawn mower that leaves zero carbon footprint and is super duper in every respect. But nobody is going to buy this when gas-powered ones are cheaper and have worked fine for >50 years. Regardless of how amazing your product is, if people don’t need it right now, they are not going to buy it. Instead, let potential customers beat your idea into the ground. Take their blows, but also take notes. You will likely see a pattern evolve after your dreams have been crushed into the ground for the thousandth time. Let these sales experiences drive (and fund) what your product becomes.
TL;DR => Stop hiding in a fantasy world of product and go out and sell something first. When you’ve got customers beating down your door, then you can go heads down.