Each time I install the latest version of MacRuby, I spend an hour re-figuring this out, so here it is…
Under certain conditions installing the ‘sqlite3-ruby’ gem on OSX with macgem fails with this error:
Building native extensions. This could take a while...
/bin/sh: line 1: 27196 Abort trap  /Library/Frameworks/MacRuby.framework/Versions/0.11/usr/bin/macruby extconf.rb
ERROR: Error installing sqlite3-ruby:
ERROR: Failed to build gem native extension.
/Library/Frameworks/MacRuby.framework/Versions/0.11/usr/bin/macruby extconf.rb
checking for sqlite3.h... yes
checking for sqlite3_libversion_number() in -lsqlite3... no
sqlite3 is missing. Try 'port install sqlite3 +universal' or 'yum install sqlite3-devel' and check your shared library search path (the location where your sqlite3 shared library is located).
*** extconf.rb failed ***
Could not create Makefile due to some reason, probably lack of necessary libraries and/or headers. Check the mkmf.log file for more details. You may need configuration options.
Provided configuration options:
--with-opt-dir
--without-opt-dir
... etc ....
/Library/Frameworks/MacRuby.framework/Versions/0.11/usr/lib/ruby/Gems/1.9.2/gems/sqlite3-ruby-1.3.2/ext/sqlite3/extconf.rb:20:in `asplode': sqlite3 is missing. Try 'port install sqlite3 +universal' or 'yum install sqlite3-devel' and check your shared library search path (the location where your sqlite3 shared library is located). (SystemExit)
from /Library/Frameworks/MacRuby.framework/Versions/0.11/usr/lib/ruby/Gems/1.9.2/gems/sqlite3-ruby-1.3.2/ext/sqlite3/extconf.rb:29:in `<main>'
The problem is that MacPorts installs into /opt/local by default, so even after doing ‘port install sqlite3 +universal’ you’ll still get this error message. You need to specify the install prefix using this awkward command line:
sudo macgem install --version '= 1.3.2' sqlite3-ruby -- --with-sqlite3-dir=/opt/local
Version 1.3.2 is the one I’ve had luck with on MacRuby up to version 0.11.
Most of the other large software vendors also have a cloud story, but only their captive customers care.
Various others, some of which are more in the Infrastructure as a Service space than platform providers (e.g. hosting plus some Content Delivery Network (CDN) capabilities), and others focused on specific app-building tools and scenarios:
NoSQL Comparisons & Overviews
Performance Comparison (Tokyo Cabinet, Berkeley DB + Memcache, Voldemort + Berkeley DB, Redis, MongoDB)
- Get the script object via the callback method. Many tutorials show obtaining it via [webView windowScriptObject]. This does not always work since the script object may not be ready (e.g. the page isn’t fully loaded).
- Take note of which delegate methods are static (isSelectorExcludedFromWebScript) and which are not.
- For no-args methods, everything works smoothly as long as you use all lowercase ruby method names with no underscores.
- If you want to pass an argument, you need to call it from JS with an underscore, declare it in macruby without the underscore, and also register it via ‘respondsToSelector’. To summarize:
- Define method in macruby: mymethod(somearg)
- In the webView initialization on the object that contains mymethod(somearg): self.respondsToSelector(‘mymethod:’)
- All of this confusion seems to arise from the translation between JS methods, selectors, and strings/symbols in macruby. The colon at the end matters, and it doesn’t work if it is a symbol, :’mymethod:’. For no-args methods, though, symbols work just fine.
- It is handy setting up the delegate methods to trap the console.(log|error|warn) methods as well as window.status changes.
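The naming rules above can be sketched in plain Ruby. The class and method names below are illustrative; in MacRuby this object would be registered with the WebView (which is omitted here), so only the JS-name-to-selector translation is shown:

```ruby
# Illustrative sketch of the JS-to-MacRuby naming rules described above.
# In MacRuby this class would back the WebView's script object; only the
# name translation is demonstrated, so plain Ruby is enough.
class ScriptBridge
  # Called from JS as bridge.mymethod_("hello") -- the JS side uses a
  # trailing underscore, the Ruby definition does not.
  def mymethod(somearg)
    "got #{somearg}"
  end

  # The bridge derives the Objective-C selector by replacing each
  # underscore in the JS name with a colon: "mymethod_" -> "mymethod:"
  def self.selector_for(js_name)
    js_name.tr('_', ':')
  end
end

ScriptBridge.selector_for('mymethod_')   # => "mymethod:"
ScriptBridge.new.mymethod('hello')       # => "got hello"
```

This is the mapping that makes the colon-terminated string ‘mymethod:’ the thing you must register, even though neither the JS call nor the Ruby definition contains a colon.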
Update: found this related article from the merbist.
There are three broad approaches to communicating version information:
In the URL
This is the most common approach. Simply add an identifier, usually near the root of the URL, and version the entire set of resources below it (and hence the entire API).
This technique bulk-versions the entire API and suggests that you shouldn’t mix resources across API versions. It is analogous to traditional API releases via new library versions. Bits within the old version (classes, data structures) are not intended to work smoothly with the new version. FWIW, this approach is common and well understood.
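As a sketch (the host and resource paths are made up), bulk URL versioning just prefixes every resource path with the version identifier:

```ruby
# Minimal sketch of URL-based API versioning; the base URL and
# resource paths are hypothetical.
def versioned_url(base, version, path)
  "#{base}/v#{version}/#{path}"
end

versioned_url('https://api.example.com', 1, 'orders/17')
# => "https://api.example.com/v1/orders/17"
versioned_url('https://api.example.com', 2, 'orders/17')
# => "https://api.example.com/v2/orders/17"
```

Note that the v1 and v2 URLs name entirely separate resource trees, which is exactly the all-or-nothing coupling described above.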
In the Media type
This seems to be the most RESTful to me, but hasn’t been widely deployed yet.
There are other possible variations that change the scope of the versioning:
It feels a bit awkward when using content type degradation conventions:
In general, the whole idea of extending mime-types to make them more flexible seems necessary but also limited. That little string simply can’t scale too far. What if you want a vendor-specific type that also happens to follow some XML standard? Can you subsume that XML standard’s mime-type, which may also have a +xml at the end?
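To make the media-type approach concrete, here is a sketch that takes apart a hypothetical versioned vendor type such as application/vnd.example.report-v2+xml. The type itself and the vendor/name/version naming scheme are invented for illustration:

```ruby
# Parse a hypothetical versioned vendor media type of the form
# application/vnd.<vendor>.<name>-v<version>+<suffix>
MEDIA_TYPE = %r{\Aapplication/vnd\.(\w+)\.(\w+)-v(\d+)\+(\w+)\z}

def parse_media_type(type_string)
  m = MEDIA_TYPE.match(type_string) or return nil
  { vendor: m[1], name: m[2], version: m[3].to_i, suffix: m[4] }
end

parse_media_type('application/vnd.example.report-v2+xml')
# => {:vendor=>"example", :name=>"report", :version=>2, :suffix=>"xml"}
```

The single +xml suffix slot is the scaling limit mentioned above: there is nowhere to say the payload also conforms to a second, independently versioned standard.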
In the Content
This is how the human driven web currently works. The content is returned with an un-versioned media type, usually from an un-versioned URL, and the handler of that media type needs to sniff the content to figure out what version was sent back. That is why we have HTML content declarations and a convoluted set of rules that are different for each browser on how to handle combinations of content declarations and browser versions (quirks mode!). In general, it works for the simple case but is difficult to manage when things get complex.
My preference right now is to define a small to medium number of media types and have them versioned independently of the resource space used to access them. Exceptions to this are when the media types are tightly related and are likely to all be consumed as a whole anyways. In this case, versioning the whole API offloads the dependency tracking from the consumer and guarantees them a complete, cohesive API. However, when this happens you should consider whether that coupling is truly necessary in the first place.
Software project estimation is hard. In fact, it is so hard that estimating within the accuracy most people expect is actually impossible. To get as accurate as humanly possible, read McConnell’s software estimation book, collect your own metrics, and then carefully and critically apply the principles. If you just need to get a quick order of magnitude check, here are some heuristics and techniques for a bottom up approach based on estimating code size.
The basic numbers are:
- 5-20 LOC per developer per hour
- 2000 person hours per year
- 50 LOC/class (Java), 100 LOC/class (C++)
This method uses objects as a proxy for size estimation. You need to supply the number of objects in the target software and out pops the magic number. The two dominant variables tend to be the number of objects (obviously) and the LOC per developer per hour. The latter can often be pulled from historical data. I tend to measure the start when developers are first engaged in serious coding, skipping the early requirements and visioning, and the end when the code is running, unit tested, and lightly functionally tested, i.e. DCUT code (Design-Code-Unit Test). For some teams this is alpha, for others beta, and for others Running Tested Features. However you do it, try to find reasonably consistent points and make your historical measurements.
If you have no historical data, here is a rough continuum:
- 25+ LOC/person/hour — prototypes; small trivial projects
- 20 LOC/person/hour — small, 2-3 person team with fast micro-requirement turnaround (e.g. onsite customer, or more commonly, the developers are able to fill in many of the details of the requirements)
- 10 LOC/person/hour — regular agile team building a non-trivial app
- 5 LOC/person/hour — typical enterprise development pace
- 1-3 LOC/person/hour — stringent or archaic, unproductive environments (e.g. banking software); you’ll see this in some historical literature, but they are often taking into account the time beyond DCUT
Pick one that seems to fit your team size and environment. Don’t be too optimistic. How big is your team? Is it a prototype? Do you have to worry about localization, security, or scalability? How familiar is the team with the languages, frameworks, and tools?
The 2000 person hours per year is just a shortcut to take care of holidays, sick days, bathroom breaks, and other daily down time. Also known as non-ideal programmer days (hours).
Now the hard part. How do you figure out the number of objects or lines of code in your future software? The easiest way is by analogy. Find a similar project that either you’ve done or someone else has done. There may be some open source projects that cover some of your project scope. If so, take a look at their code bases.
Barring that, you’ll need to do some high level design in order to start figuring out how big your code will be. Knowing how many layers your architecture will have and which frameworks you’ll be using is important. More layers tend to add more code. Frameworks often provide design constraints that you can use to start to enumerate the scope of the code — count the number of services, commands, or functions. Database tables and screens are also good proxies for code size estimation. If you already have a database schema, how many objects will be needed to wrap it? Will there be a separation of data objects and domain objects?
Screens tend to map to template files, controllers, views, model proxies, etc. If you have both an existing database schema and requirements that map out screens, you should be in pretty good shape. If you have a pure codebase with no external anchors such as screens, database tables, web services to process, or transactions to fulfill, you may want to take a different approach.
Once you have estimated the number of objects, it is just arithmetic: multiply Objects * LOC per object to get the total LOC, then divide by LOC per person per hour to get the total person hours. Divide by 2000 to get the person years.
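As a worked example using the heuristics above (every input is illustrative): 200 objects, the Java figure of 50 LOC/class, and a regular agile pace of 10 LOC/person/hour:

```ruby
# Worked example of the bottom-up estimate; all inputs are illustrative.
objects        = 200    # from your design or analogy work
loc_per_object = 50     # Java heuristic from above
loc_per_hour   = 10     # regular agile team building a non-trivial app

total_loc    = objects * loc_per_object    # 10,000 LOC
person_hours = total_loc / loc_per_hour    # 1,000 person hours
person_years = person_hours / 2000.0       # 0.5 person years
```

So this hypothetical project is roughly a half person-year of DCUT effort, before the non-code costs discussed below.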
Now take a look at the software estimation cone of uncertainty and realize your error bars are probably worse than +/-100%. Still, it is better than nothing at this point. Ideally, you should use this technique along with a couple of others, such as a top-down work breakdown structure, gut checks with a few team members, and/or high level epic estimation via planning poker. Multiple techniques done independently (don’t taint each other!) are more powerful than one expert judgement.
Note this number does not take into account non-code related and other project related costs. Designing the database, setting up build machines, project management, and high level requirements definition should be estimated separately.
- Software Estimation, Steve McConnell. http://www.amazon.com/Software-Estimation-Demystifying-Practices-Microsoft/dp/0735605351/ref=sr_1_2?ie=UTF8&s=books&qid=1275355156&sr=8-2
- A Discipline for Software Engineering, Watts S. Humphrey. http://www.amazon.com/Discipline-Software-Engineering-Watts-Humphrey/dp/0201546108/ref=sr_1_4?ie=UTF8&s=books&qid=1275358404&sr=1-4
Applications that have both a desktop and a web version, often for online/offline use cases, force you to decide whether you want to share or duplicate the back-end code that isn’t dependent on the UI bits. If you have a nice, separable back-end engine component, it is tempting to architect your application as two separate distributed components and treat the local configuration as a special case of that.
For example, when your core application logic is layered behind a REST API, why not just run a mini web server on the client for the desktop deployment scenario? The UI layer can manage this process transparently to the user.
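A minimal sketch of that idea, using only Ruby’s standard socket library. The port, path, and JSON body are all made up; a real desktop deployment would run the full REST back-end here rather than a canned response:

```ruby
require 'socket'

# Sketch of the "local mini web server" approach: the desktop UI talks
# HTTP to 127.0.0.1 exactly as the web version talks to the hosted API.
# Serves a single hard-coded response, then exits.
def serve_one_request(port)
  server = TCPServer.new('127.0.0.1', port)
  client = server.accept                 # block until the UI layer connects
  client.gets                            # consume the request line; headers ignored
  body = '{"status":"ok"}'
  client.write("HTTP/1.1 200 OK\r\n" \
               "Content-Type: application/json\r\n" \
               "Content-Length: #{body.bytesize}\r\n\r\n#{body}")
  client.close
  server.close
end
```

The UI layer would spawn this process at launch and point its HTTP client at the loopback address, keeping one code path for both deployment targets.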
The benefit is a single architecture to cover both the web and the desktop deployment scenarios — the trade-off is that you are building some fundamental latencies into your architecture. To put the costs in perspective, I tried to google up some rules of thumb on the typical latency of various types of calls. Here is a rough approximation of the relative costs:
RESTfully modelling transient resources, events, collections, and other application facets can be difficult. The post “Square Peg, REST hole” nails it and has an excellent discussion in the comments section. While I am a fan of REST and have been following the “web architecture friendly web services” debate since before dissertation-REST existed, what has become clear over the past few years is that:
And yes, hackable URLs are not a part of REST, but they certainly are an integral part of the success of web architecture friendly web services in the real world. REST wouldn’t be winning over SOAP if it wasn’t for all the successful semi-RESTful APIs that developers found far more intuitive and usable. So if you model your Resources right and people can intuitively GET them, you probably don’t need to sweat the rest of the details.
A corollary to this is that if the majority of your application doesn’t involve getting resources that are at least in the granularity ballpark of a document, then REST may not be that important to you.
Communication efficiency on projects is intrinsically linked to distance. Consider the following scenario: you have a blocking issue that can be answered quickly by the right person. You’re not exactly sure who the right person is but you know roughly the team or group you should talk to. It is a bit tough to describe in an email, so you need to articulate it verbally, or even better, with some drawing and wild gesticulating. The other group is:
a) Your immediate team. You lean over to the person next to you and start describing it. Within 30 seconds, they probably know if they are the right person or it is someone else. Bonus: ‘accidental broadcast’ — if you are sitting close together, one or two other teammates probably overheard and may chime in. Within 3 minutes, you have your answer and everyone is back to work.
b) Another team down the hall or on another floor. You’re not sure what their schedule is, so instead of walking over and risking the key person not being there, you book a meeting at least a couple of hours in the future just to be courteous. The meeting time-slot is at least 1/2 hour long since Outlook defaults to that. 3 hours later you have your answer.
c) Another team across the Atlantic/Pacific. You have a one or two hour window the next day that is a good conference call time for both parties. If that isn’t open, you try the next day. After the call, inevitably, there is a dangling thread so you send a quick clarification email. You get an answer the following day, unblocking you. Total calendar time: 3 days.
There are in-between brackets too. Someone who is on your team but not sitting next to you is more likely 30 minutes than 3 minutes (“I’ll wait until I get a coffee to swing by their desk”). Someone only three timezones away is likely 30 hours rather than 3 days.
But that is not the worst part. If anything goes wrong in the communication or clarification goes beyond a quick follow-up email, the communication delay typically jumps up to the next bracket. 3 hours turns into 3 days and 3 days quickly turns into 3 weeks.
At a large company with widely distributed teams, three week delays happen all the time. A conference call requires a follow-up or two, a key person is on holidays for a week, people are booked during the two hour timezone window until next Tuesday, and so on.
The important thing to understand is that *there is no fix*. Consider it a fundamental latency attribute of the medium. Timezones, the lack of face-to-face communication, the inability to “instant interrupt” for minor issues, less awareness of people’s “micro-schedules”, and other practical issues such as room bookings and the dreaded 15-minute-delay-while-the-organizer-fights-the-web-conference-software all conspire to alter the bandwidth of the channel.
Higher average latency and less bandwidth means people will default to trying to solve issues themselves when it could have been solved more efficiently with input from someone else. If every person on a ten person team needs to communicate on a weekly basis with someone far away, that adds up to a lot of friction.
What can be done is to organize in such a way that you don’t need to communicate across that channel as often. Conway’s law is a reflection of this. The small, co-located teams recommended by Agile methods like Scrum and XP are a realization of this. They advocate re-organizing the project backlog and/or the teams so you can avoid having to communicate across slow, thin pipes. Break up into mostly independent sub-teams. Invest in the up front retraining or knowledge transfer to make it possible.
When those 3 day or 3 week delays happen repeatedly, instead of creating more processes to improve the communication or looking to technology (video conferencing!) to solve the problem, try to figure out how to avoid the need to communicate in the first place.
A co-worker recently asked me what to learn and what to watch out for when starting iPhone development. There are key decisions that you need to make early on that can make or break your project. I’m also thinking about porting my existing iPhone app to the iPad, as well as writing a new app, so now is a good time to revisit those early decisions and start on the right track.
The first decision point is native vs. non-native. Native iPhone apps are written in Objective-C and use the iPhone SDK Cocoa Touch libraries. Non-native apps can be written as web apps running in the Safari browser, developed in other development platforms like Flex and compiled for the iPhone, or developed as hybrid web/native apps using a bridging framework such as PhoneGap.
If you don’t know Objective-C it is tempting to make the native/non-native decision based on your current skill-set, but that is shortsighted. If you want to make a quality app you need to understand if your app really demands native integration or not. Obviously, if you’re doing a game or anything with significant graphics you must do a native app (although in the case of games they are often mostly C or C++, which runs fine on the iPhone anyways). But even if your UI just has complex interactive screens you’re probably going to wish you had gone native.
However, if your app is complex then you will thank yourself later if you just bite the bullet and learn Objective-C and Cocoa. In this case, there are a few considerations:
- Interface Builder (IB). How should you use it, if at all? Options:
- Don’t use it at all, create all views programmatically.
- Only use it to layout major screens, do event manipulation in code.
- Go all in and do UI layout, properties, event wiring, etc. in IB.
- Programming style. Stick to Objective-C 2.0 unless you have prior experience using older idioms.
- Watch out for out-of-date tutorials on the web that contain mixed programming idioms.
- Memory management strategy. Think about your window and resource ownership structure. Stay consistent!
Doing all of the UI programmatically can lead to huge amounts of unreadable code and cut’n’paste setting of properties. But the result is very predictable. You know exactly what is going to happen and where things are set. Here is a brief example setting up a label:
UILabel *myLabel = [[UILabel alloc] initWithFrame:CGRectMake(50, 100, 200, 100)];
myLabel.backgroundColor = [UIColor grayColor];
myLabel.font = [UIFont fontWithName:@"Arial" size:14.0];
myLabel.shadowColor = [UIColor grayColor];
myLabel.shadowOffset = CGSizeMake(1, 1);
myLabel.textColor = [UIColor blueColor];
myLabel.text = @"label text";
[self.view addSubview:myLabel];
Not too bad, but now consider a medium complexity screen may have a half dozen widgets on it. My current preference is to do screen layout and properties in IB, but to do all events and manipulation in code. This allows me to layout complex screens with multiple widgets and images visually while still not relying on IB too much. The downside is it can lead to some inconsistency. For a trivial screen with one image and a button (e.g. a tutorial screen) it seems pointless to create and manage a XIB file. But if you create it programmatically then some of your screens are represented in IB and others are not. Overall it seems worth the trade-off — all of the important screens can be mocked up and rapidly edited in IB.
Programming style is another topic to pay attention to when you are in the learning stages. There are many tutorials on the web, but not all use the same idioms. It is easy to learn from multiple sources and end up with a mixed style that can lead to inconsistency resulting in bugs. In particular, pick an approach to handling properties and stick with it. Objective-C 2.0, introduced not long before the iPhone was first released, includes features to create properties declaratively. Unless you know what you are doing, learn this well and stick with the 2.0 conventions.
Objective-C is the type of language that greatly benefits from a consistent coding style, particularly when it comes to memory management. Your approach will depend greatly upon the application you are building, but similar to C++, the main thing is to think it through in advance. Don’t expect to do it as you go and refactor on the fly. Think through which resources are memory intensive and which are not, and plan your memory management strategy around the memory intensive resources.
So, in a nutshell:
- If you are doing a non-trivial app and you care about quality, strongly consider doing it natively in Objective-C.
- Interface Builder is great for layout and setting properties on complex screens. Use it for that but do everything else programmatically. If your screens are simple, avoid it altogether.
- Learn Objective-C 2.0 idioms and use them consistently.
- Learn good memory management techniques and plan your app around memory intensive resources.