After learning the basics of Ruby and ORMs, I’ve been working on a project built with the Sinatra web framework. Along with ActiveRecord, I’ve used it to build a simple blogging application, and I’ll describe the process of developing it here.

The project uses a basic Model View Controller architecture. It has a few simple models that interact with various routes in the controllers, which then compose and serve the different blog pages through the views.


The first thing I worked on was the set of ActiveRecord models representing the data the blog needs. This includes Users, who write and edit Posts, which in turn have Tags. I also used a PostTag model, but only as the intermediary for a join table connecting Posts and Tags. This was necessary because a Post has many Tags and a Tag has many Posts, and any time there’s a mutual ‘has many’ relationship between models, a join table is needed.

These associations provide a useful way to list different types of Posts. All Posts belong to a User, so you can easily list all of a User’s Posts, or all of a Tag’s Posts as well. Providing an easy way to organize information is critical to building a successful application, and ActiveRecord provides ample ways of doing just that.

The Post model code:

class Post < ActiveRecord::Base
  belongs_to :user
  has_many :post_tags
  has_many :tags, through: :post_tags # many-to-many via the join table

  extend FindBySlug

  # URL-friendly version of the title, e.g. "My First Post" -> "my-first-post"
  def slug
    self.title.downcase.gsub(" ", "-")
  end

  # Formatted creation date, e.g. "Jan 05, 2020"
  def readable_date
    self.created_at.strftime("%b %d, %Y")
  end
end

Unlike a framework like Rails, Sinatra has a unified router and controller. I used a separate controller for each model, which helped organize all the different routes and actions. The Users controller focuses on signing up and logging in Users. The Posts controller handles the majority of the application’s functionality, namely the creation, editing, and deletion of Posts as well as Tags (since Tags are created on the same page as Posts). Finally, the Tags and application controllers cover a few utility tasks. As a whole, the controllers work together to process the data and drive the site’s functionality.

A route/action to render the new post creation form:

get '/posts/new' do
  if logged_in?
    @tags = Tag.all
    erb :'posts/new'
  else
    redirect to '/users/login'
  end
end
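The logged_in? helper this route relies on isn’t shown in the post; in a Sinatra app it would typically sit in a helpers block and check the session. A standalone sketch (the session key is an assumption):

```ruby
# Hypothetical version of the logged_in? helper. In the app it would live
# in a Sinatra helpers block and read the request session; here it takes
# the session hash as an argument so it can run on its own.
def logged_in?(session)
  !session[:user_id].nil?
end
```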


Just like Rails, Sinatra uses ERB (Embedded Ruby) files to dynamically generate HTML pages and serve the requested content. The views correspond to the routes/actions in the associated controllers, so there are views for showing a user page, displaying a login form, and so on. Using Bootstrap for basic front-end styling helps make the application look at least palatable (as opposed to hideous unstyled HTML). It may be a bit superficial, but it helps the application look like a complete thought.

The ERB code to display all the posts for a tag:

<h2><%= @tag.name %></h2>
<% @tag.posts.each do |post| %>
  <h3><a href="/posts/<%= post.slug %>"><%= post.title %></a></h3>
  <h4>By: <a href="/users/<%= post.user.slug %>"><%= post.user.username %></a></h4>
  <p><strong><%= post.readable_date %></strong></p>
  <p><%= post.content %></p>
<% end %>

In conclusion, the project took longer than I expected because of the refactoring needed to handle things like processing form data during post creation and editing, as well as all the individual gears that even a simple back-end web project like this involves. It was certainly an educational experience and a good step toward learning to build professional-grade applications.

You can find the source code for this project here on GitHub.

In my process of learning Ruby, I’ve been working on a CLI gem that I’ll discuss here. My goal was to create a command line interface that could retrieve a list of the current top-selling games at the Steam Store. After retrieving the list, the user would then be able to select a game and get more detailed information.

The original intention was to organize the project through three classes: a Scraper, Game, and CLI. The Scraper would be responsible for scraping the data from the Steam website. The Game class would be used to instantiate objects for each game based on data retrieved by the Scraper. The CLI would then be used to tie the other classes together and create an interface for the user to interact with this data. In execution, most of this would go according to plan, but there would be some changes necessary.

I initially used Nokogiri for the Scraper class, then manually navigated through the CSS selectors on the Steam Store front page as well as the individual game pages. It was a tedious process, but it worked well enough to acquire the necessary data. However, two major roadblocks soon arose:

1) For certain games, the Steam Store would put up an age verification page before allowing the user to continue to the game page, and when using Open-URI to access the page, the request would be blocked. I looked into ways of getting around the age verification, and it involved creating a cookie, but that was out of the scope of what I wanted to accomplish with the project.

2) As soon as I finished the Scraper methods, the Steam Store changed its entire website and CSS structure for its holiday sale, and rewriting the methods temporarily, only to revert them once the sale was over, seemed tedious and inefficient.
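For what it’s worth, the cookie workaround mentioned in the first roadblock would probably have looked something like this (I didn’t pursue it, and the specific cookie names Steam checks are assumptions here):

```ruby
require 'open-uri'

# Hypothetical age-gate workaround: Open-URI treats extra string-keyed
# hash entries as request headers, so a Cookie header can be sent along
# with the request. The cookie names below are assumptions.
AGE_GATE_HEADERS = { "Cookie" => "birthtime=0; mature_content=1" }

def open_game_page(url)
  open(url, AGE_GATE_HEADERS) # returns an IO-like object for Nokogiri to parse
end
```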

So after doing some research, I was able to find a better way: leveraging the Steam Storefront API. I created a new class, SteamAPIHandler, to acquire the JSON data and parse it, using the data to create Game objects that could then interact with the CLI. This allowed me access to a much larger amount of data and proved to be much more efficient as well. For example, here’s an old Scraper method:

def self.scrape_top_sellers(store_url)
  return_array = [] # empty array that will contain the game hashes to return at the end of the method

  doc = Nokogiri::HTML(open(store_url)) # uses Open-URI to open the URL, then Nokogiri to parse the HTML we will scrape

  doc.css("div.tab_content#tab_topsellers_content div.tab_item").each do |item| # iterates over each game listing on the site
    return_array << { # pushes a hash of game data onto the array
      title: item.css("div.tab_item_name").text, # uses CSS selectors to extract the appropriate data
      price: item.css("div.discount_final_price").text,
      genres: item.css("div.tab_item_top_tags").text,
      url: item.css("a.tab_item_overlay").collect { |link| link['href'] }.join
    }
  end

  return_array.each do |i|
    x = i[:url].split("/") # splits the link into individual elements between the slashes
    x.pop # gets rid of the ?info at the end of the link
    i[:steam_id] = x.last # the Steam id is the last element of the link
    i[:url] = x.join("/") # rejoins the URL and saves it without the ?info
  end

  return_array
end

…compared to a new SteamAPIHandler method that accomplishes the same thing:

def self.get_top_sellers
  doc = open("") # uses Open-URI to get the JSON file, ?cc=US for US currency
  data_hash = JSON.load(doc) # loads the JSON data
  data_hash["top_sellers"]["items"] # selects desired part of hash
end

In practice, the CLI, upon loading, uses the SteamAPIHandler to acquire the top sellers and create Game objects from the data. It prints the information to the screen; then, when the user selects a game to learn more about, the CLI uses the SteamAPIHandler to acquire the additional data and print it out. An earlier version acquired the additional information for every game as soon as the CLI loaded, but that caused noticeable load time since it made ten API queries in succession; fetching one game at a time makes the delay much less noticeable.
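As a sketch of that flow, the Game class might consume the parsed items like this (the attribute keys are assumptions about the JSON shape, not the actual field names from the API):

```ruby
# Hypothetical Game class: wraps one item hash from the top-sellers data
# and keeps a registry of every instance for the CLI to list.
class Game
  @@all = []

  attr_reader :name, :final_price, :steam_id

  def initialize(attrs)
    @name = attrs["name"]
    @final_price = attrs["final_price"]
    @steam_id = attrs["id"]
    @@all << self # register each instance so the CLI can enumerate them
  end

  # builds one Game per item hash returned by the API handler
  def self.create_from_collection(items)
    items.map { |item| new(item) }
  end

  def self.all
    @@all
  end
end
```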

Overall, it was a good learning experience and I accomplished what I had set out to do. You can see the source code for this project here on GitHub.

Mitul Mistry

I’m Mitul Mistry, a full-stack developer and designer.