Front-end frameworks

I’ve been working with javascript for years, and I’m well used to working neck-deep in the DOM. I know how to make the elements on a page bend to my will, but setting up a solid structure that works well and runs fast isn’t exactly easy.

One of my recent projects was to create a unique navigation element from a deeply nested list. My first thought was to simply copy the structure in the DOM and manipulate it based on what was there. As time went on, though, the structure grew more and more complex, and with all of the manipulation I was doing, I just didn’t like my solution, so I refactored it into something closer to an MVC model. That was much easier to manage: I could model the structure and data cleanly, and then create the HTML elements as necessary based on the view the user was currently in.

That’s generally how projects like that go. You start with a simple, direct solution, but as things get more complex, you need to add more structure to keep them manageable and free of side effects.

This is where front-end frameworks like Ember, Angular, and Backbone come into play. Each of them has its own style, but they all follow a similar concept: models that control the structure of the data, views that control how the data is displayed, and controllers that link the two. When a user changes state, the controller asks the model for an update, and the result is passed back up the chain through the controller to the user’s view.
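To make that flow concrete, here’s a minimal sketch in plain javascript. This isn’t code from any of those frameworks; the names and shapes are made up purely for illustration.

```javascript
// Model: owns the data and how it changes.
function Model(items) {
    this.items = items;
}
Model.prototype.add = function (item) {
    this.items.push(item);
};

// View: turns model data into output (a string here, instead of DOM nodes).
function View() {}
View.prototype.render = function (items) {
    return '<ul><li>' + items.join('</li><li>') + '</li></ul>';
};

// Controller: links the two. A user action updates the model, and the
// updated data flows back up the chain through the controller to the view.
function Controller(model, view) {
    this.model = model;
    this.view = view;
}
Controller.prototype.userAdds = function (item) {
    this.model.add(item);
    return this.view.render(this.model.items);
};

var controller = new Controller(new Model(['Home', 'About']), new View());
console.log(controller.userAdds('Contact'));
// <ul><li>Home</li><li>About</li><li>Contact</li></ul>
```

The real frameworks add data binding, events, and routing on top, but the division of responsibilities is the same.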

It’s still all very strange to me, as it abstracts you away from the DOM, but that said, rendering data to a template is much easier and more succinct than building up elements via string concatenation or element construction.
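As a rough illustration, compare building the same markup both ways. The tiny render function below is a stand-in for a real template engine, not any particular framework’s API:

```javascript
var user = { name: 'Jason', role: 'Developer' };

// String concatenation: it works, but it's easy to get lost in the quotes.
var concatenated = '<div class="card"><h2>' + user.name +
    '</h2><p>' + user.role + '</p></div>';

// A toy template renderer: swap each {{key}} for the matching property.
function render(template, data) {
    return template.replace(/\{\{(\w+)\}\}/g, function (match, key) {
        return data[key];
    });
}

var templated = render(
    '<div class="card"><h2>{{name}}</h2><p>{{role}}</p></div>',
    user
);

console.log(templated === concatenated); // true
```

Same output, but the template version reads like the markup it produces.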


Challenge Accepted! Flattening an Array

As part of the Onramp program at Mobiquity, our awesome instructor, TJ, went over some of the basics of javascript and jQuery today. Since we had some extra time at the end of the day, he gave us some challenges, which I was able to knock out no problem. The last one, though, was a little trickier. The setup: write a recursive function that flattens an arbitrarily nested array into a single array of its values. For instance, it should turn this:

[
   1, 
   [2],
   [
      3,
      [4]
   ],
   [[[5]]]
]

into this:

[1, 2, 3, 4, 5]

I was able to come up with a solution in class that used two parameters: the array to be flattened, and a sub-array that its values should be added to. I wasn’t satisfied with that solution, but I couldn’t do better in the ten minutes I had after I’d ‘finished’ the assignment. I talked with TJ about it, and we couldn’t get it to work.

So, as I was lying in bed, the problem got stuck in my head. I knew that I could do this, and I had almost had it, but it kept spitting out an empty array. Then it hit me what I was doing wrong: I was using Array.prototype.concat as if it modified the array it’s called on, when it actually returns a new array. Once I realized my error, the solution fell into place.

var complex_array = [
       1, 
       [2],
       [
          3,
          [4]
       ],
       [[[5]]]
];

var simple_array = [3];

function flatten(arr) {
    var output;
    if (!(arr instanceof Array)) {
        // Base case: a non-array value passes through unchanged.
        output = arr;
    } else {
        // Recursive case: flatten each element and collect the results.
        // Remember: concat returns a NEW array; it doesn't modify output in place.
        output = [];
        arr.forEach(function (value) {
            output = output.concat(flatten(value));
        });
    }
    return output;
}

console.log(flatten(complex_array)); // [1, 2, 3, 4, 5]
console.log(flatten(simple_array)); // [3]

Success!

I know that there is certainly room for improvement, but I’d call this a pretty good solution.
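For what it’s worth, one possible refinement (just a variation, not necessarily better): Array.prototype.reduce can express the same recursion without managing the output variable by hand.

```javascript
function flattenReduce(arr) {
    if (!(arr instanceof Array)) {
        return arr;                      // base case: pass the value through
    }
    return arr.reduce(function (output, value) {
        return output.concat(flattenReduce(value)); // concat returns a new array
    }, []);
}

console.log(flattenReduce([1, [2], [3, [4]], [[[5]]]])); // [ 1, 2, 3, 4, 5 ]
```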


Mobiquity: Week 2

My first week is over, and my next one is just a day away. Already I have learned a ton. Hopefully next week will be just as productive, and hopefully I can get more sleep than last week, but either way, I’m looking forward to it.


Mocha

Mocha is a fairly simple automated testing framework. It runs on Node.js, and it has two main functions: describe() and it(). describe() acts as a grouping mechanism for a suite of tests, and it() actually runs the tests. You can use any assertion library you like within it. This sounds weird until you see it in practice, so, “On with the show!”

describe('A user', function () {
    it('should be able to log in to the system', function () {
        // Test code.
    });
});

It’s simple enough that it doesn’t get in the way, and it’s almost self-organizing. It can run tests synchronously or asynchronously, and you can control which tests in your suite to run. If that weren’t enough, I dare you, the next time you run your tests with mocha, to type the following:

$ mocha --reporter nyan <file_to_test>

You’re welcome.


Grunt

Introduction

I’ve been very interested in Foundation since my last job with Alachua County, where we used it on two of the main projects I was on. So interested that I plan on helping with its development. Unfortunately, the development documents are woefully out of date (as of the time of this post, CONTRIBUTING.md was last updated seven months ago). So much so that it’s from a previous major version of Foundation. Useless. As a result, I’m going to need to learn about their tools on my own. One of the first tools I encountered is Grunt, and that’s what I’m going to go over here.

What is Grunt?

Grunt is an automation tool that lets developers automate tasks such as running test suites, linting, minification, compilation, and pretty much anything else you need to run every time you change your code, or just don’t want to do by hand. It’s built on Node.js, so you configure it in javascript. Since I enjoy writing javascript, this is right up my alley!

Installation

Most or all of this is on Grunt’s Getting Started page, but I’m going to try to distill it down. Follow along if you want. Before we get into how to use Grunt, we need to install it. The only requirement is to have Node.js installed, which you can get from their website. Simply download the package and run the installer. No command line needed at this point. I hope that you’re still with me, because now we’re going to be in the command line a lot more.

Once Node is installed, we can actually install Grunt. To do that, we’re going to use the Node Package Manager (npm) by running the following command in Terminal (I’m using a Mac, so if you aren’t, some of the details may be slightly different):

$ npm install -g grunt-cli

Got that? This installs the Grunt command-line interface globally; then we install a local copy of Grunt in the project we’re working in. If the project is already set up with a package.json file and a Gruntfile.js, like Foundation, then you can simply run the following command, and it will install all the necessary dependencies:

$ npm install

Setting up package.json

package.json stores project metadata, such as the project name and its dependencies. Luckily, you don’t need to create it by hand, as there is a handy tool called grunt-init that will set it up for you based on templates. To install it, simply run the following command:

$ npm install -g grunt-init

This lets you scaffold a project from a template by simply typing:

$ grunt-init TEMPLATE

For instance, you can apply the jQuery template with:

$ grunt-init jquery

Templates live in the ~/.grunt-init/ folder, each in its own sub-folder. For instance, the jQuery template can be installed with the following command:

$ git clone https://github.com/gruntjs/grunt-init-jquery.git ~/.grunt-init/jquery

Once your package.json file is set up, you can install packages and have them added to the project’s dependencies by running the following command:

$ npm install <module> --save-dev

In our case, let’s install Grunt:

$ npm install grunt --save-dev

Gruntfile.js

This is where the magic happens. Since Grunt runs on Node.js, it uses Node’s module conventions. All of your Grunt code needs to go within the Grunt wrapper function, in the following format:

module.exports = function (grunt) {
    // grunt grunt grunt
};

One thing you will typically do to start a Grunt project is call the grunt.initConfig method with a configuration object. Project metadata can be read in from the package.json file like this:

module.exports = function (grunt) {
    grunt.initConfig({
        pkg: grunt.file.readJSON('package.json')
   });
};

Templates

Once this is set up, you can reference the pkg data within other properties using templates. Any string property containing a <%= %> template will have it replaced with the value of the expression inside. For instance, a common task is to concatenate several javascript files into a single file. We can do this with the grunt-contrib-concat plugin. First we need to install it, which we do from the terminal prompt with:

$ npm install grunt-contrib-concat --save-dev

Then we configure the task in our Gruntfile:

module.exports = function (grunt) {
    grunt.initConfig({
        pkg: grunt.file.readJSON('package.json'),
        concat: {
            options: {
                banner: '/* --- <%= pkg.name %> --- */\n',
                footer: '/* --- End <%= pkg.name %> --- */\n\n'
            },
            dist: {
                src: 'src/**/*.js',
                dest: 'js/main.js'
            }
        }
    });

    // Load the plugin that provides the "concat" task.
    grunt.loadNpmTasks('grunt-contrib-concat');

    // Default task.
    grunt.registerTask('default', ['concat']);
};

Whenever grunt is run from the folder containing the Gruntfile.js, this reads in every .js file under the src folder, concatenates them into main.js in the js folder, and wraps the result in a banner and footer built from the name property in package.json.

Conclusion

Basically, Grunt takes the grunt (hur hur hur) work out of building web apps. Its simple syntax and powerful features mean never having to do the same thing twice in your project again.


It’s Javascript All the Way Down

As a developer, it’s sometimes difficult to keep up with current trends. When you work with an organization that uses a certain set of technologies, you tend to focus on those, and some of the others get put in the backlog. Starting at Mobiquity, we’re learning about the latest technologies used in the industry. It’s fantastic, because these are all things that I have been wanting to learn but haven’t yet had the opportunity to.

Today was a bit of an overview of some of the things that we will be learning in the coming weeks. We went over Node.js, MongoDB, and Grunt in class, and we’ll be posting on our blogs about some other things this weekend. (Look for one on Grunt and Mocha from me very soon.) The thing that amazed me was just how far javascript has come.

I’ve been a web developer for years. I’m used to jumping between languages: server-side code in PHP, C#, or Java, databases in one form of SQL or another, javascript (maybe a little CoffeeScript) for front-end scripting, shell scripting for automation, HTML for content, and CSS/Sass for display. Working between them has always come naturally to me, but it can get to be a bit much at times. Now, with these technologies, we can simplify that to javascript for the server, automation, the database, and still the client-side scripting. The only things left are HTML and CSS/Sass.

It makes me wonder how long CSS will last. Some libraries have been written to generate CSS through Javascript, but none of them are quite mature enough to use in a production environment. That’s one of the few places left.

That said, I could see JSON being a basis for web pages, as well as for the display of them. After all, it’s a tree structure, and it’s ultimately just properties. The browsers already know how to interpret them, and we already have the DOM. Why do we bother with HTML? Why not simply serve up a fully-rendered DOM straight in Javascript? It would all still work just the same, as the browsers already do this after reading in the HTML.

Taken to an extreme, Chrome has a just-in-time compiling of Javascript, and we have Chrome OS. Who knows how far this rabbit hole goes. I’m certainly glad that I have spent as much time on Javascript as I have. It only seems to be getting to be more and more relevant.

Something to think about…


Agile Pizza

I haven’t been feeling well, but my training is going great. I am becoming more familiar with git, and I believe that I have our Mobiquity Agile process down. One of the exercises that we did to learn our process was what they have termed Agile Pizza. Basically, we made three pizzas using the Mobiquity methodology (which is a Scrum method).

Agile Pizza Wireframes

So, here’s the setup. We were given three big cards, which acted as our wireframes, and a stack of note cards with user-stories on them. We were told that we could use Google as our technical documentation. We started by examining our user-stories, which featured sentences such as, “As a tomato, I should be sliced into discs,” and, “As pizza dough, I should be crispity crunchity.” We needed to complete the task in four thirty-minute sprints.

Sprint 0

We started our Sprint 0 phase by grouping the cards roughly into a few sections: one for the crust, one for the sauce, one for the toppings, and one for the specific pizza builds. Next we located all of our ingredients and tools. We ran into some limitations (dull knives without a sharpener, no onion powder, only one pan on which to make a pizza, etc.), but we incorporated them into the challenge. We set up stations around the kitchen, and once that was complete, we roughed out a plan for our four sprints. Since making the dough and sauce seemed like the most difficult and time-consuming blockers, we decided to tackle them first, and because we only had the one pan on which to cook the pizzas, we decided to sauté the onions, peppers, and mushrooms, as well as pan-fry the sausage. Really, the bulk of the work.

Sprint 1

Taking care of business

We started Sprint 1 with a planning session. We went through each of the user-stories, wrote out the tasks necessary to complete them on the back of each card, and then played planning poker, ranking the difficulty of each story as 1, 2, 3, 5, or 8. Right after, we worked out a general plan for who would be assigned which tasks, and since I was the only one who had watched enough Food Network to know what a chiffonade was, I took cutting vegetables.

Once the clock started, I helped wash a few peppers before carrying them off to be sliced. Then I worked on the tomatoes, basil, and onions. Unfortunately, I made a bad assumption that someone would bring me the onions while I was working on the basil. They needed to be cut before the veggies could be sautéed, so I had to switch to them and back. Josh was awesome: as soon as he was done washing, he was right there with me, cutting like a fiend. Moving around was tough, and at first the team leaders were in the kitchen watching, so I kicked them out, as they were getting in the way.

It really felt like one of those cooking competitions on Food Network, and I assumed that role. I called out things that I needed, and safety instructions as I moved about with sharp or hot things. I was a bit worried that I was coming off as bossy, but I made sure to check in with my team. I called out for a time update several times, and when I couldn’t get an answer, I just kept to my tasks, making sure not to completely destroy my veggies with the dull knife or slice my hand open. As time went on, my user-story cards started to fall as I called, “QA!” to get approval and officially knock out the card. Each felt like a small triumph (burndown, baby!), and despite the small setbacks, when time was called, we had three bowls of pizza dough, cooked sausage, cooked veggies, sliced tomatoes, and sliced basil. We’d done it!

Sprint 1 finished up with a showcase of the user-stories we had completed. I was surprised that we really had done it. We worked well as a team. We followed that up with a retrospective, where we reviewed the things we did well, the things that needed improvement, and suggestions for improving them.

Sprint 2

The Dreaded T

Looking at the T was intimidating. It was such an odd shape that we weren’t sure how to approach it at first. Again, we went through our planning phase, this time focusing on finishing pizza three while preparing the crusts for pizzas one and two. Again, we wrote out the tasks and played planning poker to assign story points.

Back to action! This time, I was on the crust. I was trying to shape the circle by hand while Laura used the one roller to make the rectangle. The plan was for her to just make a general shape that we would trim down afterward, but her circle was coming out so well, and mine not nearly as well, that I suggested we switch target shapes, and so I made mine more like a rectangle. We made a great team. While we were doing that, Kushagra was working on the T, which he made by overlapping two rectangles. We had trouble getting them to join until someone had the idea to wet them, and once I rubbed them with a little bit of water, it worked like a dream.

Finished ingredients waiting to be a pizza

Pizza three was assembled, and we put it in the oven. It was a bit heavy on sauce, and I was worried that it wouldn’t get a crunchy crust. Once the pizza was in the oven, we had some time on our hands, so we got drinks and milled about passing the time. Without an extra pan, we just couldn’t do anything with the other pizzas. While the sprint ran over thirty minutes (pizza takes twenty minutes or more to cook in that oven), that was determined to be acceptable, as long as the pizzas were mostly done.

At the showcase, it was a really nice feeling to have a pizza completed, and it looked delicious. In the retrospective, we really didn’t have much to improve on. We worked great together, and we wondered if we could finish a sprint early, but since we only had the one pan, that just wasn’t possible.

Completed pizza 3. It was hard to get this thing out of the pan!

Sprint 3

All downhill from here…

Sprint three went much like Sprint 2, except we had even less to do. This time I opted for less sauce, and while some of my teammates were worried that it would be harder to spread, since we had had an issue before, I figured I would just use my hand, and it worked out pretty well. Thirty(ish) minutes later, we had pizza!

Pizza 2 complete

Since we had already demonstrated that we knew how to do the showcase and retrospective (and also likely because we had drunk a few more beers), we sort of phoned in those parts.

Sprint 4

All over but the eating.

I can’t really call this a sprint. I don’t even know if we bothered with the user-stories. At this point, it was just: put the pizza together, put it in the oven, and wait. We didn’t even bother with the vestiges of a showcase or retrospective, except as we admired our results and dug in.


Retrospective

Holy crap, it’s 1:42 a.m.!

This was such a great event. Not only did it help solidify the Scrum process, but it was fun, and it really helped us come together as a team. I’m proud to be working with the people that I am. It helped build trust, and now I know how to make a pizza from scratch!

Thanks to Andrew, Chris, Daniel, Elise, Firoz, Kushagra, Mike, Laura, and Josh. I’m proud to be a Mob-Spider!

Mob-Spider shirt. Okay, one more picture for the selfie in the team shirt.



First day on the job

Today was my first official day with Mobiquity. I’m really excited to be working with these people. Today’s Onramp topics were mostly about how Mobiquity is structured, and how they do business. We also went over User-Experience-Driven Design, and an intro to git.

While I won’t be working too closely with the UX team on a day-to-day basis, I’m really interested in how they work. I plan on reading up on their work and trying to keep in contact with them. I love the idea of being a customer advocate, and how they humanize the end users of the products we build.

Once again, Google Docs came in super handy. I had never used the presentation doc type before, but I got to today, when I collaborated with Laura on a presentation about committing code to a git repo. We were able to work directly together easily, and when it was better to split slides and work independently, we could do that without getting in each other’s way, including when adding and deleting slides. The experience was seamless, and I was impressed with what we were able to do in a short time (only an hour to research, prepare, and demonstrate).

I’ve also started to take on the role of team curator of the wiki. I just *hate* passing links via email. I much prefer having a place to look for things like that, in case you missed them.

After work, we went to Liquid Ginger, and had a wonderful meal. I especially liked the choice because we were downstairs, and as a result the place was quiet enough to hear people on the other side of the table for once.

I feel happy with the choice that I made coming to work here. I feel like I’ll be doing very good things.


Preparations for my new job

I’ve spent a large portion of my weekend preparing to start my new job tomorrow. I read the entire book (really an ordered set of loose cards) on Agile programming. It was mostly a review, as I had learned about Extreme Programming in the past, and that is what Agile is built on.

I’ve also installed just about every app and tool that I can think of that I’ll use, and most of the ones that they have suggested. I’ve brushed up on git and polished up my blog theme and updates.

I can’t say that I feel ready, but I’m ready enough.


Hello, Mobiquity!

Reintroduction

It’s been a while since I posted anything on this blog. Lots has happened since then. With Twitter and Facebook on the rise, I stopped blogging because of the overhead of recording events, but I just started a new job with a company called Mobiquity, and they encourage their developers to keep a blog. So I’m going to use this opportunity to reintroduce myself and describe where I am in life right now.

The early years

I’ve been programming computers most of my life in one form or another. I’ve always been fascinated by puzzles and the creative process. My first introduction to programming, as a kid, was a version of BASIC on a TRS-80 that my dad had. I remember that the computer didn’t come with a storage device, so whenever you turned it off, it lost everything. As a result, dad had a few programs written down that we would play with together. The one that sticks in my memory was a simple lunar lander game in which you controlled an ASCII-art lunar ship with downward or lateral thrust. The challenge was to reach a flat location with a velocity below a certain value before running out of fuel.

What was interesting to me was that there were certain values I could adjust that would affect the game. I could change the gravity, starting fuel, thrust power, and later the terrain generation, and see the results directly in the game. Later, in school, I got to play with LOGO, where I could make all sorts of shapes with a few simple commands, and later still, on an Apple IIgs, I really started learning about logic structures (loops, conditional switches, etc.). What really inspired me, though, was when my family purchased our first full computer, a 386 running MS-DOS. Since I had gotten used to being able to write little programs, I was quick to find QBASIC on the computer and start playing.

Unfortunately for me, the Internet still wasn’t something people had access to outside of a university at that point, and all I had were the help files and, later, a book that my parents bought me. I learned how to solve problems, and to break larger problems down into a flow of smaller ones. I made toys and games, mostly. As I gained experience, I tackled more complex problems and began branching out into batch scripting as well, so that I could access my creations from a command prompt or menu.

In high school I started seeking out better sources of information, and I took classes in Visual Basic and C++. I loved working with Visual Basic because it was so easy to create a GUI, when for so long I had been limited to the black background of the command prompt. Suddenly a whole world opened up to me. No longer was I limited to self-made menu-driven systems; now I could make full, robust GUIs with a simple API.

Getting serious

College brought more formal training. I studied more languages, including Java, PHP, and Javascript. While I still made smaller stand-alone programs, my interests increasingly turned to web-based applications. My senior project was a Java/Javascript WYSIWYG HTML editor, and this was before CSS, so all of our pixel-perfect positioning had to be done in HTML tables (including supporting nested tables). (Present me cringes at the implementation now.)

Programming as a career

During school, I landed an internship with AEgis Technologies, where I really learned about development in a professional environment. I also learned the value of having colleagues with knowledge and experience different from my own. It was there that I started learning C#, and I got to help with QA as well. Unfortunately, nothing lasts forever, and I wound up being laid off. This derailed me for a while, though I didn’t give up my passion.

I worked a few jobs after that point, some of which were in the technical field. While they weren’t software development jobs, I did learn more about the systems I was working on. My first professional development job started in 2005, with Van Gogh Vodka. It was there that I found my niche in web development. I worked with other developers and designers to create products for the company, whether one-off sites or whole new applications. We were a small company with a small team, but we accomplished a lot in the time I was there. Even still, the majority of what I did was support work.

In 2012, Van Gogh Imports decided to close their Orlando office, and so once again I had to venture out to find a new place to work. This seemed like a tragedy at first, but it really set me on a much better path, as my software development skills had been under-utilized, and I was worried they were atrophying. I settled on the Alachua County Board of County Commissioners as a Systems Analyst, and started what has been my favorite job up to this point in my life.

Professional Development

Alachua County has wonderful people working there who really love their jobs. My team and I managed the county’s web presence, and I came in at a great time: I got to be a part of the county’s new website development process almost from start to finish. Our task was pretty monumental: to marry SharePoint 2010 (a powerful platform built on the web of yesteryear) with Zurb’s Foundation (an open-source framework built for the web of today and tomorrow), all while building our own brand identity. I’m so proud of what we did, and I look forward to seeing the site finally go live in the coming months. (Hey, mom. You see that navigation menu up top? I made that!)

In the year I was there, I solidified my interest in web development. I just love the accessibility of it. I can create something, post it to the web, and all anyone has to do to use it is open their browser and click a link. That still amazes me, even today. I’ve seen javascript go from a tool used only for form validation into a full-fledged development platform. I’ve seen the start of CSS, and now its preprocessor extensions like LESS and Sass.

The Road Ahead

Now I embark on a new journey. I am leaving my friends at Alachua County and joining a new family at Mobiquity, where I am excited to be working alongside talented developers to create world-class mobile applications. Whether they’re native apps or running in the browser, I’ll be creating software that people use in their daily lives. This sort of environment is new to me, and I’m looking forward to meeting the challenges ahead.

This is the first time that I have started a new job with confidence. I know that I am a skilled developer. If you need something done in javascript, I can do it. If there is a new technology that will help me do my job better, I can learn it. I can’t wait to see what I will accomplish at Mobiquity!