Harvesting Content

Fluffy clouds
I did not take this photo

While working on the website of the best real estate agent in Dallas/Fort Worth, I realized there was organic content already available: her Google business profile had reviews she'd been accumulating from previous clients. By consuming the Google Places API, these reviews could be leveraged not only as content for search engines to pick up, but also as feedback for potential new clients to read.

Scene Cut: Enter Buzzwords

This is not the API you're looking for

Well, the best place to start is referencing the developer API guide. Every Google API requires an API key, so we had to start there. For the time being, I used my own personal Google API key, which gives you the five most recent reviews for a given Place ID. Authenticating with the Google business profile that owns the requested Place ID removes this restriction, but I woke up one morning and decided to template things up, and didn't want to go about getting login credentials.

I have my gripes with Google's API documentation. For the Places API, I found the response samples lacking; there were almost none. There are some schemas, but I was looking for dummy template data to work with.

For a Place Details call that returns only the reviews data:

    curl -X GET -H 'Content-Type: application/json' \
      -H "X-Goog-Api-Key: API_KEY" \
      -H "X-Goog-FieldMask: reviews" \
      https://places.googleapis.com/v1/places/<place ID>

Here we set the X-Goog-FieldMask header to reviews, but if you're after other data, tweak it to your needs. Since you're reading this in a browser, here is a similar URL to plug into your address bar:

https://places.googleapis.com/v1/places/<place ID>?fields=reviews&key=API_KEY

Using this I was able to get some live data to work with, and tweak some sample responses from their older API versions to "match" the current schema. I was really starving for some dummy data.
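If you're hunting for dummy data too, here's roughly the shape a v1 Place Details response takes when the field mask is set to reviews. The values below are entirely made up, and the exact field set can vary, so treat it as a sketch rather than Google's official sample:

```json
{
  "reviews": [
    {
      "name": "places/<place ID>/reviews/example-review-id",
      "relativePublishTimeDescription": "a month ago",
      "rating": 5,
      "text": {
        "text": "Example review text goes here.",
        "languageCode": "en"
      },
      "authorAttribution": {
        "displayName": "Jane Doe",
        "uri": "https://www.google.com/maps/contrib/example",
        "photoUri": "https://lh3.googleusercontent.com/example"
      },
      "publishTime": "2024-01-15T00:00:00Z"
    }
  ]
}
```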

Hugo

Here Undergoes Growing Organisms

So I built this thing using Hugo. It seemed simple, easily configurable, and more importantly, modifiable. A Hugo site's "source code" is based on markdown. That markdown is translated into HTML by the framework's build process (Hugo itself is a single binary written in Go, not a node.js toolchain). This means what you work on during development is not exactly what ends up on the web server; the directory structure and files the web server ultimately serves are defined by those markdown files. Browsers are built to translate web technologies into a user interface, and they're quick at doing that when a web server just "serves". Hugo isn't like a LAMP-stack web server, which has to do server-side processing before it serves its files. With Hugo, everything an end user needs is already generated, so things happen in their browser much faster.

Since Hugo is a static site generator, I needed somewhere to keep this data. Hugo has its /data directory, which is a handy feature, but it didn't quite fit my needs. I'm open to opinions though. More on why later.

Instead I threw in a new /static/json directory with my dummy data JSON file. The static directory tells Hugo these files are to be served on the web as-is. I made some tweaks to my /static/css/custom.css, made a placeholder .../partials/reviews.html, and created a new /static/js/reviews.js to handle displaying the reviews. I went for the space-optimising rotating carousel approach. This gives me the UI and a place to stash my JSON data, and yes, I chose to keep it as JSON.
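A rotating carousel like that doesn't take much. Here's a minimal sketch of what a reviews.js could look like; the /json/reviews.json path, the #reviews element ID, and the 8-second interval are my own illustrative choices, not what's actually on her site:

```javascript
// reviews.js — minimal rotating review carousel sketch.
// Assumes the build stashed the Places API response at /json/reviews.json
// and the page has a <div id="reviews"> placeholder.

// Pure helper: advance the carousel index, wrapping around.
function nextIndex(current, total) {
  return total > 0 ? (current + 1) % total : 0;
}

// Render one review as simple HTML.
function renderReview(review) {
  const author = review.authorAttribution
    ? review.authorAttribution.displayName
    : "Anonymous";
  const text = review.text ? review.text.text : "";
  return `<blockquote>${text}</blockquote><cite>${author} — ${review.rating}/5</cite>`;
}

// Browser-only wiring: fetch the stashed JSON and rotate every 8 seconds.
if (typeof document !== "undefined") {
  fetch("/json/reviews.json")
    .then((res) => res.json())
    .then((data) => {
      const reviews = data.reviews || [];
      if (reviews.length === 0) return;
      const el = document.getElementById("reviews");
      let i = 0;
      const show = () => { el.innerHTML = renderReview(reviews[i]); };
      show();
      setInterval(() => { i = nextIndex(i, reviews.length); show(); }, 8000);
    });
}
```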

Integration

CI/CD workflow, AWS cloud deployment build cycle, and other buzzwords

So now I've got my framework set up for consuming the data. Assuming the missus starts pulling in millions of visitors to her real estate site, I didn't want to be blasting Google's API with requests, loading this data live every time someone loads the page. I've already got my dummy data, so I have something to work with when deploying her site locally for development.

After the work I did for IndexNow, which I covered here (not malware), I opted for a similar approach. Now my build creates the json directory to stash my data, and the site can make the API call once during the build process.

    - >-
        curl -X GET https://places.googleapis.com/v1/places/<place ID>
        -H 'Content-Type: application/json'
        -H 'X-Goog-Api-Key: API_KEY'
        -H 'X-Goog-FieldMask: reviews'
        -o public/json/reviews.json
sample YAML build step
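For context, a curl step like that slots into a build spec's command list after the site is generated. Here's a minimal sketch assuming an AWS CodeBuild-style buildspec; the phase layout and the hugo command are illustrative, not the actual pipeline:

```yaml
version: 0.2
phases:
  build:
    commands:
      # Generate the static site into public/ first.
      - hugo --minify
      # Then fetch the latest reviews once per build and stash them
      # where the front end expects to find them (path is my assumption).
      - >-
          curl -X GET https://places.googleapis.com/v1/places/<place ID>
          -H 'Content-Type: application/json'
          -H 'X-Goog-Api-Key: API_KEY'
          -H 'X-Goog-FieldMask: reviews'
          -o public/json/reviews.json
```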

There you have it. You can actually see the real estate Google reviews data live on the site.