Sathyajith Bhat's Blog

March 11, 2019

Setting up a secure Docker image scanning solution with Anchore and Drone CI

A while back, I did a round-up of a few container scanning solutions and mentioned I wanted to take another look at Anchore. The past few days, I’ve been playing with Anchore – this time, integrating it with Drone CI.





Drone is a “Container-Native, Continuous Delivery Platform” built using Go. It uses a YAML file, .drone.yml, to define and execute the pipeline.





End Goal



For this project, we will be integrating Drone and Anchore. With the setup complete, every push to the remote repository triggers a Docker image build. The built image is then added to Anchore Engine for analysis and scanning. Drone integrates with most popular SCM tools – and for this project, we will integrate with GitHub.





Setting up Drone



Follow the instructions listed on Drone’s Installation Guide to set up Drone. A sample Drone server configuration and the command to start Drone are listed below. Make sure to substitute the client ID and secret with the ones generated during setup.






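A minimal sketch of the server configuration, assuming a Drone 1.x setup against GitHub – every value below is a placeholder to be replaced with your own:

# drone.env – Drone server configuration (values are placeholders)
DRONE_GITHUB_SERVER=https://github.com
DRONE_GITHUB_CLIENT_ID=<oauth-client-id>
DRONE_GITHUB_CLIENT_SECRET=<oauth-client-secret>
DRONE_RPC_SECRET=<shared-rpc-secret>
DRONE_SERVER_HOST=drone.example.com
DRONE_SERVER_PROTO=https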




Run Drone with the following command:






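Something along these lines, assuming the drone.env file above and the drone/drone:1 image:

$ docker run \
    --env-file=drone.env \
    --volume=/var/lib/drone:/data \
    --publish=80:80 \
    --publish=443:443 \
    --restart=always \
    --detach=true \
    --name=drone \
    drone/drone:1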




Once the Drone server is up and running, head over to the Drone UI and click “Activate” on the repo you wish to integrate Drone with. Clicking “Activate” sets up a webhook on the repo, so any activity against the repo generates an event, which is then pushed to Drone.





Setting up Anchore Engine



Follow the instructions on Anchore’s website to install and run Anchore Engine. Once Anchore is up and running, we can use anchore-cli to interact with it. Specifically, to scan an image, we need to:





Submit the image to Anchore Engine for analysis
Wait until the analysis is complete
Evaluate the analysis against the policy engine



We can achieve this with the following sequence of commands:





anchore-cli image add <image>
anchore-cli image wait <image>
anchore-cli evaluate check <image>
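
anchore-cli reads the engine endpoint and credentials from environment variables. A sketch of the full sequence against a locally running engine – the URL, credentials and image name are placeholders:

$ export ANCHORE_CLI_URL=http://localhost:8228/v1
$ export ANCHORE_CLI_USER=admin
$ export ANCHORE_CLI_PASS=<anchore-admin-password>
$ anchore-cli image add docker.io/myorg/myapp:latest
$ anchore-cli image wait docker.io/myorg/myapp:latest
$ anchore-cli evaluate check docker.io/myorg/myapp:latest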



Combining these commands with Drone’s pipeline, we get this for the .drone.yml file:






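A minimal sketch of such a pipeline, assuming Drone 1.x syntax and the plugins/docker plugin – the image name (myorg/myapp), the Anchore endpoint and the secret names are placeholders:

kind: pipeline
name: default

steps:
- name: build
  image: plugins/docker
  settings:
    repo: myorg/myapp
    tags: ${DRONE_COMMIT_SHA}
    # registry credentials omitted for brevity

- name: analyze
  image: anchore/engine-cli
  environment:
    ANCHORE_CLI_URL: http://anchore-engine:8228/v1
    ANCHORE_CLI_USER: admin
    ANCHORE_CLI_PASS:
      from_secret: anchore_password
  commands:
  - anchore-cli image add myorg/myapp:${DRONE_COMMIT_SHA}
  - anchore-cli image wait myorg/myapp:${DRONE_COMMIT_SHA}

- name: policy-check
  image: anchore/engine-cli
  environment:
    ANCHORE_CLI_URL: http://anchore-engine:8228/v1
    ANCHORE_CLI_USER: admin
    ANCHORE_CLI_PASS:
      from_secret: anchore_password
  commands:
  - anchore-cli evaluate check myorg/myapp:${DRONE_COMMIT_SHA}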




Commit the .drone.yml file and push the changes to the repository. This results in the commit and push event being delivered to Drone, kickstarting the Drone pipeline.





Navigating to the Drone UI will show the pipeline stages and the result of each stage. An example screenshot is shown below.





[Screenshot: Drone pipeline stages and their results]



Comparing against the .drone.yml file, you can see that Drone created a new pipeline (boringly titled “default”) consisting of 5 stages:





clone stage for cloning the repo. Although this isn’t listed in the .drone.yml file, Drone supports git by default and automatically adds the clone stage as the first stage.
Build stage for building the Docker image and tagging it with the SHA of the commit.
Analyze stage for submitting the built Docker image to Anchore for image and vulnerability analysis.
Policy Check stage for evaluating the Docker image and validating whether the image is good to deploy or not. In my earlier post I’d mentioned that creating and editing policies is a pain – but recently, Anchore has released a centralized repository of policies that can be downloaded and installed.



If the policy check (or any stage) fails, the pipeline ends and does not trigger subsequent stages.








You can extend the pipeline further, adding steps to retag the Docker image and push it to Amazon Elastic Container Registry (ECR) – Drone’s ECR plugin makes it very easy to do so, as in the sketch below.
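
A sketch of such a publish step using the plugins/ecr image – the registry, repo and secret names below are placeholders:

- name: publish
  image: plugins/ecr
  settings:
    access_key:
      from_secret: aws_access_key_id
    secret_key:
      from_secret: aws_secret_access_key
    registry: 123456789012.dkr.ecr.us-east-1.amazonaws.com
    repo: myapp
    tags:
    - ${DRONE_COMMIT_SHA}
    - latest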





What Next?





You can take a look at Drone’s Conditions and Triggers, which let you define and limit pipeline execution based on specific events/branches. Combined with writing your own plugins, Drone lets you set up a complete, secure CI/CD platform for your Docker images.
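
For example, a trigger section in .drone.yml (Drone 1.x syntax) can restrict the pipeline to push events on the master branch:

trigger:
  branch:
  - master
  event:
  - push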


October 2, 2018

So I wrote a book: presenting Practical Docker With Python



So yeah, that actually happened! I’ve always wanted to publish a book, and thanks to Apress, that is now a reality. The book is titled “Practical Docker With Python: Build, Release and Distribute your Python App with Docker” and is targeted at people who are new to Docker and want to containerize their application (with an example Python chat bot). The book starts with a brief introduction to containers and Docker, and guides you through getting started with Docker before diving into deeper topics such as Docker Networks, Volumes and Compose.


You can buy the book on Apress.com or Amazon, either as a Kindle eBook or a paperback (if you buy both, I will be very happy :P) – or if you have a Safari Online subscription, you can read the book for free using the Safari Online app or the website.


I’ve spent a lot of time working on the book and I’d really appreciate feedback – whether as reviews on Amazon or Goodreads, or as email. Please do feel free to send me any feedback – I’d love to improve upon what I have started.


A round of thanks to my Adobe I/O colleagues (especially Sangeetha) for making a poster of the book cover and gifting it to me – I will treasure it forever!


 


September 2, 2018

On Securing Containers and Open Source tools for scanning vulnerabilities in Docker images


I recently published a couple of articles elsewhere:







How to Increase Container Security and Ward Off Threats – Adobe Tech Blog
5 OpenSource tools for container security – OpenSource.com
Scanning Docker Images for Vulnerabilities with Aqua MicroScanner – previously, on my blog





The former was adapted from a talk I gave at Container Conference India; the latter was a glance at the open source container/Docker image scanning landscape. I plan to take a deeper look at Anchore, Clair, gVisor, Kata Containers and Docker Notary soon. Stay tuned!



June 15, 2018

E3 2018 Round up of trailers/games that I liked

E3 has come and gone, and most of this year’s press conferences were boring (what was EA even smoking?). Having said that, some of them did grab my attention. Below is a list (in no specific order) of gameplay, trailers and things I’m looking forward to and thought were good. Enjoy!




Skyrim Very Special Edition – Hilarious and very well done
The Elder Scrolls Blades – FPS RPG for mobile; play in portrait or landscape; sounds great; PvP, PvE, town building; and coming to phones, PC and VR – all this for free.
Cyberpunk 2077 – 5 years since the original teaser came out. How time flies!
Death Stranding – Still have no idea what this is about, and by the looks of it, I’m not going to like it a lot either.
Trials Rising – I have had a blast with Trials Fusion, but the artificial gating of levels by means of star count really annoys me. Hope they don’t do the same for this.
Marvel’s Spider-Man – Looks and plays amazingly well in this demo; how will it fare in the full game?
The Crew 2 – They seem to be trying to pull a Forza Horizon 3, and in fact playing it during the private beta made me feel like I was playing a reskinned Forza Horizon 3 – and not in too good of a way.
Forza Horizon 4 – Forza Horizon 3 was an amazing game, but frame-rate issues were really annoying. Hope FH4 won’t have these issues.
Skull & Bones – Interesting premise, but it was hard to judge what part of the video was scripted and what was gameplay.
Kingdom Hearts 3 – Huge set of crossover characters. Catch the other trailers – The Showcase, Frozen
Ghost of Tsushima – Saving the best for last – looks really good, sounds fantastic and the gameplay looks like a mix of Witcher 3 and Metal Gear Rising.

Anything you feel I missed, or anything you preferred? Drop a comment below.


May 28, 2018

Scanning Docker Image for Vulnerabilities with Aqua MicroScanner

Containers are slowly becoming the standardized unit of deployment. As containers become more popular, they also become targets for attacks that exploit vulnerabilities in the packages within the image. There are quite a few container vulnerability scanning solutions (for example: Clair, Twistlock, Aqua) – however, most of them are either commercial or require an elaborate setup, which makes it difficult for individual developers to include them as part of the container build process.



I recently found that Aqua has introduced a free-to-use tool called Aqua MicroScanner for scanning container images for package vulnerabilities. What makes it even more attractive and easy to use is that it doesn’t need any elaborate or predefined server setup – all that is needed is to:



Get a token from Aqua
Add the scanner and run it as part of the container build process

If the image contains any packages with vulnerabilities, Aqua will present a summary of the vulnerabilities – including the average score – as well as a list of the vulnerabilities found.


To get started with Aqua MicroScanner, register for a token:


$ docker run --rm -it aquasec/microscanner --register

With the token available, add the scanner as part of your build process. For example, if we were to check and scan an image based on nginx, the Dockerfile would look like the one below. Note that the token is passed in at build time as a build argument.




FROM nginx:1-alpine

RUN apk add --no-cache ca-certificates && update-ca-certificates

ADD https://get.aquasec.com/microscanner .
RUN chmod +x microscanner

# The token is supplied at build time so it isn't baked into the image
ARG token
RUN ./microscanner ${token}



When we build the image, passing the registered token as a build argument:


$ docker build --build-arg=token=<your-token> .


The scanner will be executed during the build and will scan the Docker image. Any vulnerabilities found will be displayed as below:

"vulnerabilities": [

{

"name": "CVE-2016-3189",

"description": "Use-after-free vulnerability in bzip2recover in bzip2 1.0.6 allows remote attackers to cause

a denial of service (crash) via a crafted bzip2 file, related to block ends set to before the start of the block.",

"nvd_score": 4.3,

"nvd_score_version": "CVSS v2",

"nvd_vectors": "AV:N/AC:M/Au:N/C:N/I:N/A:P",

"nvd_severity": "medium",

"nvd_url": "https://web.nvd.nist.gov/view/vuln/de...",

"vendor_score": 4.3,

"vendor_score_version": "CVSS v2",

"vendor_vectors": "AV:N/AC:M/Au:N/C:N/I:N/A:P",

"vendor_severity": "medium",

"publish_date": "2016-06-30",

"modification_date": "2017-08-21"

}

]



The summary would be like so:


"vulnerability_summary": {

"total": 2,

"medium": 2,

"score_average": 4.3,

"max_score": 4.3

}


Aqua will stop the build if it finds any vulnerabilities of severity “High” – however, we can pass the --continue-on-failure flag to ignore the high-severity issues and continue the build.
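
Following the Dockerfile above, the flag is appended to the scanner invocation – a one-line sketch:

RUN ./microscanner ${token} --continue-on-failure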


I think this tool is really good, especially for individual developers – with just a few lines of Dockerfile instructions, a developer is able to add vulnerability scanning of their images – and combined with a CI system such as GitLab CI/CD pipelines, it’s a good way of building vulnerability-free container images.


 


PS: I will be speaking about Container Security at Cloud Native Meetup: Containers & Serverless Deepdive. Do join if interested!


May 21, 2018

Convert newsletters to RSS feeds with Kill-The-Newsletter

Long time, no write! Newsletters have become all the rage these days, and I guess for good reason – they’re curated, come in (usually) once a week and typically offer a respite from the deluge of news that comes in via typical RSS feeds or Twitter. Naturally, I subscribed to a few initially, and then the list of newsletters grew – and now I am stuck with a newsletter bomb in my Inbox.



[Image: newsletters, newsletters everywhere]

 


Filters were nice for classification, but archiving the mails meant they would just languish in the filtered view, never to be looked at. I thought it’d be nice to somehow have them come to my RSS feed instead of cluttering up my Inbox, and with a quick search I found Kill-The-Newsletter. This handy little web app creates a random email address for you to provide when subscribing, and converts the incoming mails to RSS (well, to be specific, Atom) feeds. Kill-The-Newsletter is open source, so you can even self-host the app on your own servers.


 


Pretty nifty and has saved my Inbox from clutter.


November 5, 2017

A Brief Look at the Oculus Rift

VR and I go together like chalk and cheese – ever since I was a kid, I’ve had motion sickness, which kept me from playing most FPS games, and my last attempt at VR (at IGX 2016) was a disaster – I could barely withstand 30 seconds of it. Granted, the game selection was bad – for me anyway (Driveclub on PSVR) – but I still didn’t expect that bad of a reaction.




With that bit of context, the reactions that flew in when I told folks that I (well, Jo, my wife, to be more precise) bought the Rift were expected.




 


So it wasn’t entirely my decision to buy it in the first place, but given the experience with the Rift so far, I think it’s been a great buy.


Unboxing & Hardware Setup

I’ll let the pictures do the talking.


 


[Images: unboxing photos]

The Oculus Rift Touch bundle comes with two Touch controllers, two sensors, a couple of AA batteries for the Touch controllers, the headset and a lens-cleaning cloth. There are a few things worth mentioning:



I really, really liked the box packaging. It was well designed, with enough space to place all the components safely and pack them away
The battery door for the Touch controller has a magnet, which means when you push it closed, it automatically snaps into place. That’s nice feedback and a well-thought-out feature
USB ports: This is something that I didn’t bother to check, but Oculus recommends that you have a minimum of 3 USB 3.0 ports and a USB 2.0 port. Some discussions on Reddit suggest that they may work on USB 2.0 ports, but for the best tracking and results, I think it’s better to get PCI-E cards which offer USB slots. My desktop had only 2x USB 3.0 ports, but luckily I noticed this and grabbed an Anker 4-port USB 3.0 hub, which works very well.
GPU support: VR requires a fairly beefy CPU & GPU. The Rift also requires a free HDMI port on the GPU. While this may not be a problem, some GPUs might come with only one HDMI port and the rest DisplayPort + DVI ports (which was my case) – and you need to have both the VR headset and the monitor connected, at least for the initial setup. Not having a second HDMI port was a big problem – thankfully I managed to find a spare DVI cable, connected my monitor via DVI and plugged the Rift into the HDMI. If you’re using multiple monitors, remember this and grab the required adapters as well.
The initial setup is a fairly involved process, but this is not mentioned anywhere on the box (which doesn’t come with a manual). Yes, the software install is a breeze, but when you have USB and HDMI ports dangling like a Hydra, not knowing where to plug what was a bit weird, and I had to search Oculus’ support site for the instructions. I’m not sure why they didn’t make a leaflet out of this. I realized later during the software setup that they prompt you to plug in the required components – I guess I’m just too used to the old style of plugging the hardware in and then doing the software install.

Software Setup and First Launch

Once you have the Rift hardware set up properly, Oculus will start the first-time setup. This involves things like entering your height (to calibrate the ground height), Touch sensor calibration, mapping out the play area and setting up the Guardian system (which is basically a wireframe “wall” indicating you’re about to exit the safe area). This doesn’t take too long and is a one-time thing, even if you have multiple people using the headset.


Where Oculus has nailed the VR experience is their first-launch app, called “First Contact”. It’s basically set in a spaceship (or a room?) with a robot that keeps giving you “programs” on floppy disks, which you “grab” and push into a 3D printer, then pick up the printed result. It sounds like no big deal, but the detailing and the way the robot is done are incredibly awesome and will evoke a great response from anyone.


Comfort

The headset is far lighter and much more comfortable than the PSVR and the Vive. Also, something the other headsets don’t have – the Rift actually comes with over-the-ear headphones – and they sound really awesome. Mentioning the headphones may sound strange or trivial, but putting on headphones over the VR headset (or earphones before putting on the headset) when you can’t see much is a pain, and the built-in headphones make the whole experience seamless.


The Touch controller is crafted very well and fits your palm nicely, and doing gestures such as pointing, grabbing or making a fist feels so natural that you forget you have a controller in each hand. The Touch controller has some other neat features – when your hand is in the field of view of the sensor, you see a pair of virtual hands so that you know where to reach to grab the controller. This seems easy, but when your eyes are covered by the headset, it’s not as straightforward as it looks.


Games

I haven’t played a whole lot of games. Among the ones that I did play, Robo Recall came with the bundle and is regarded as one of the best VR games – and I can see why. You pick up the guns from the holster. You can catch bullets and throw them back. You can catch robots and throw them back. You can grab them and pull them apart. You can grab them, throw them in the air, grab your weapons and shoot them. All this while doing gestures just like you would in real life. And while you’re doing all this, you’re reactively ducking to avoid gunfire and bending your knees to pick up guns or other things off the ground – it’s quite an experience, and it makes me wonder why, even today, the typical VR demos are the crappy low-res rollercoaster ones.


RecRoom is another great VR experience – it’s basically a big social club with some great mini games such as VR Paintball, 3D charades and so on. RecRoom is in early access, but is free for now.


I’m yet to pick up The Unspoken (which is basically what made Jo purchase the Rift) and will also pick up Diner Duo when it goes on sale. I did give Project Cars a go (again!), but it didn’t last long – the motion sickness made me uncomfortable before I could even say the word.


Summing up

If you’re still on the fence about VR and have a decent system capable of it, I think the Rift bundle – especially at the US price of $400 – is a great purchase. There are loads of VR games, both free and paid, and some of them are good enough to make the purchase worthwhile.


Have any questions? Drop a comment below or send me a tweet – I will reply.


 


April 28, 2017

Accessing Chef Databag Items from within attributes

In Chef parlance, databags are global variables saved in JSON format; they are stored on, and accessible from, the Chef server. Given that they are indexed and searchable, and that they can be encrypted, they are ideal candidates for storing secrets such as credentials and SSH keys.


Chef provides an easy way to search for and fetch databags and databag items from within a recipe.


For example, to fetch a databag called admins, it’s as easy as:


admins = data_bag('admins')

And to fetch databag items:


admins.each do |login|
  admin = data_bag_item('admins', login)
  user_name = admin['id']
  ssh_keys = admin['ssh_keys']
  groups = admin['groups']
end

Unfortunately, the data_bag and data_bag_item helpers are not accessible from within attribute files; as of now, the working approach is to use the Chef::DataBagItem.load method, like so:


credentials = Chef::DataBagItem.load('admins','sathya')
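
The loaded item’s values can then be assigned to node attributes – a minimal sketch, assuming a hypothetical myapp cookbook and the admins databag from above:

# attributes/default.rb
credentials = Chef::DataBagItem.load('admins', 'sathya')

# Hypothetical attribute names, for illustration only
default['myapp']['deploy_user'] = credentials['id']
default['myapp']['ssh_keys'] = credentials['ssh_keys']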

March 13, 2017

Of nginx’s mid cut off responses and proxy buffers

Among the services I look after, the biggest and most high-profile is the user-facing website. The website is your bog-standard frontend (powered by Express/Angular) which fetches data via an API powered by the backend (built on Rails). The typical flow is that Express receives the request from the browser and makes a request to the backend, which is served by the Rails API via nginx acting as the reverse proxy.



A couple of weeks back, the team received a support request that one specific route on an internal webapp (of similar architecture to the user-facing website above) was throwing a 500 Internal Server Error. Now, in our case, a 500 error is typically a sign that the backend was not able to complete the request successfully. I took a look at the application logs and the responses were all proper – nothing out of the ordinary. The error came intermittently, and since the route was not heavily used, I opted to defer looking at it.


A few days ago, the same problem manifested again, but on a different route (this time, a more frequently used one), and I couldn’t afford to delay looking into it any longer.


I did some basic analysis:



The DB returns the data properly
Rails objects are correctly populated
The API returns the data
Browser console didn’t show any errors

So what was causing the problem? I tried to make the same request with cURL, and this time I noticed that the API’s JSON response was truncated and not complete – something I hadn’t noticed earlier. Since it’s nginx doing the last-mile delivery, I checked the nginx error logs, and there were a few of these:


[crit] 14606#0: *1562074 open() "/var/lib/nginx/proxy/7/02/0000000027" failed (13: Permission denied) while reading upstream, client: x.x.x.x, server: , request: "GET / HTTP/1.1", upstream: "xx", host: "xxxx"


Aha, now we have something to look for. But why the permission denied while reading the upstream? Some Google searching and reading through the documentation and nginx forums indicated that once the proxy buffers fill up, nginx spools the remainder of the upstream response to a temporary file on disk – and since it couldn’t open that file, it was not able to send the complete response, hence the truncated JSON responses.


Data from the responses that were cut off showed that anything over 64 kB was getting truncated, indicating the proxy buffer size was 64 kB. But this was not defined anywhere in our nginx configuration. Some more digging around the documentation confirmed that the default buffer size was indeed 64 kB.


A small fix to increase the buffer size, a deploy via Chef and we’re all good again.
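
For reference, the relevant nginx directives look something like this – a minimal sketch with illustrative sizes, not the exact values we deployed:

# In the server/location block proxying to the Rails upstream
proxy_buffers 16 16k;          # number and size of per-connection buffers
proxy_buffer_size 16k;         # buffer for the first part of the response
proxy_busy_buffers_size 32k;   # limit on buffers busy sending to the client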


Some more reading:



Stack Overflow
Nginx Forums: Is proxy_buffering needed?
Nginx Documentation: Proxy Buffering

Moral of the story: know your platform defaults, and keep revisiting your configuration settings – especially if they weren’t set by you or were set long ago!


 


February 1, 2017

Xenserver and adding/attaching new storage to a VM

I had an instance today where a local VM (provisioned by Xenserver) was running low on disk space, and I wanted to increase the disk space allocated to it. The last time I did this by increasing the space from within Xen Manager, I failed miserably (the VM was configured with LVM, and neither pvscan nor lvscan was able to see the increased space).


This time I tried a different approach:



Rather than increasing the space of the attached disk, I created a new disk and attached it to the VM from the Xenserver Management Console
Since the VM is configured with LVM, I decided to add the new disk as a Physical Volume (PV) and then extend the Volume Group (VG) and Logical Volume (LV)

Creating a new disk and attaching it to the VM from the Xenserver Management Console is fairly straightforward. First, make note of the device to which the new disk is attached. In this case, it is assumed to be xvdc. I’m also assuming that the volume group mesa-nl-vg exists and that /dev/mapper/mesa--nl--vg-root is the logical volume path.


Here are the steps:




# Create a new partition
sudo fdisk /dev/xvdc

# Create a new PV
sudo pvcreate /dev/xvdc1

# Extend the VG
sudo vgextend mesa-nl-vg /dev/xvdc1

# Extend the LV. The +100G indicates the size by which it should be increased
sudo lvextend -L+100G /dev/mapper/mesa--nl--vg-root

# Resize the filesystem
sudo resize2fs /dev/mapper/mesa--nl--vg-root


HowToGeek has a nice writeup & cheatsheet on LVM; you should read it to get up to speed with LVM.

