@42cc Forked django-lean

May 24, 2012 – 17:32

… at https://github.com/lyapun/django-lean-42cc

First of all, we’ve removed broken migrations.

We plan to use it extensively, so expect more changes.

42 Coffee Cups idea incubation roadmap

January 4, 2012 – 01:45

Every developer has tons of ideas that will be the next Google/Facebook/Zynga. Or at least, we think we do.

42 Coffee Cups is a software development company, so it would be foolish not to have a roadmap for incubation of such ideas.

Tonight I did another exercise in ordering Product-Market fit, Problem-Solution fit, the value hypothesis, the growth hypothesis, the business model canvas and other tools of the Lean Startup toolkit in my head.

The idea that hit me was that we, at least in my region (Ukraine), almost always miss a key step – Message-Channel-Problem fit. Simply put: can I get my Message through the selected communication Channel to any meaningful number of people that might have the Problem I’d like to solve?

This makes the incubation roadmap look this way:

  1. Reasonable business model canvas
  2. Message-Channel-Problem fit
  3. Problem-Solution fit
  4. Product-Market fit (value hypothesis confirmation)
  5. Financial model fit (growth hypothesis confirmation)

Feel free to throw a stone.

When to monetize?

November 17, 2011 – 15:06

During the BlackBox Connect program, I enjoyed the lecture by Byrne Reese of ToyTalk – How to Build a Great Product.

I encourage you to take a look at the presentation on SlideShare.

One thing that particularly hit me was his idea that you should not attempt to monetize your service too early. This contradicted a lot of my experience and what Customer Development and Lean Startup teach. Or, at least, it seemed to…

I’ve followed up to get more information and here’s his response:

Just to be clear, I don’t think that creating a premium plan is a bad idea, far from it. In fact, it is a perfectly logical model for some products. I think what you have to weigh is how important the network benefits of your service are. In other words, if the value of the service you provide scales in accordance with the number of people using the service, then asking people to pay might slow adoption, and thus limit the service’s value to early adopters.

If you are thinking of a freemium model, where anyone can use the service for free, and then elect to upgrade, then you should treat that as a totally different scenario.
My question to you would then be, what would differentiate a premium and a non-premium plan? What would paying get you? And if you know the answer to that question, then have you tested those ideas in the market yet to see if they are the right levers for up-selling customers on your service? When we talk about testing our hypotheses, this is what we are talking about. Don’t take it for granted that offering someone 1MB vs 10GB of storage space (for example) is what will compel people to pay for your service. Test it first.
One last word… two things you should consider: a) the features of premium, and b) the price of premium. Chances are these might change over time as you learn. So the trick in my opinion is how you position the premium product. For example, while it is valuable to have a paid version of the product available to customers in order to test how willing people are to pay, having such a product available can potentially create challenges for yourself later on, especially for example if you learn that you initially charged too much, or too little. Customers don’t like it when they learn someone else paid less than they did, or when you suddenly increase their monthly fee. Never take lightly the responsibility of dealing with people’s money. In the end what is important is not the price, but how you manage your customer’s expectation with regards to price.

That makes sense really.

Lesson learned: If your service has strong network effects – defer monetization. If not – consider what people perceive as the value of your service.

I encourage you to follow Byrne at @byrnereese and to read his blog at http://www.majordojo.com.

Get rid of manual quality control

March 29, 2010 – 17:55

After reading the recent Eric Ries piece in HBR, one thought kept bouncing around my head…

What are such rules for software development service companies like mine, 42 Coffee Cups, which does Python/Django web development?

Some things in Eric’s article resonate quite closely with me. For example, when a stupid bug creeps into the code we deliver to our customers for review, at least half of the time I’m asked – aren’t your QC people watching for it?

I had a difficult time answering this, because my gut feeling from 20 years of software development is that dedicated QC slows things down while hardly compensating for it with increased quality.

You know, if debugging is the process that removes bugs from code, then programming must be the activity that implants them there… :)

In my reality, when such a bug is drilled down to find the cause, it’s almost always something like:

  • the particular piece of code wasn’t covered by unit tests
  • whoever integrated the code didn’t run the tests before shipping (yup, it happens occasionally – we’re all human)
  • whoever integrated the code didn’t bother to eye-check the result, and the HTML, while technically clean, gives an awful UI experience
  • there was a deadly misunderstanding between the customer and the developer

Each and every item on this list could be fixed by:

  • automating more work
  • adding more automated checks, like running CI tests on every checkin or measuring test coverage (see the sketch after this list)
  • delivering to the customer faster and asking for feedback
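To make the automated-checks point concrete, here is a minimal sketch of the kind of per-checkin script a CI job could run, using coverage.py around the Django test runner. It is an illustration rather than our actual tooling; the project layout (manage.py in the working directory) is an assumption.

```python
#!/usr/bin/env python
# Hypothetical per-checkin CI check: run the Django test suite under
# coverage.py and reject the build if any test fails. Paths are
# placeholders for illustration.
import subprocess
import sys


def main():
    # Run the whole test suite under coverage.
    if subprocess.call(["coverage", "run", "manage.py", "test"]) != 0:
        print("Tests failed - rejecting this checkin")
        return 1
    # Show per-module coverage so untested code is visible in the CI log.
    subprocess.call(["coverage", "report", "-m"])
    return 0


if __name__ == "__main__":
    sys.exit(main())
```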

During early stages or if done wrong, this gives awful results – a customer has to spend a day just listing defects :(

Yet…

Isn’t a reliably shorter round-trip time (the time from a feature request being submitted to the code being shipped to you) and a solid process of “making errors only once” much better than having to wait an additional day (on average) for QC review, paying for its costs and still having errors shipped sometimes?

42 Coffee Cups still has a long way to go to achieve this ideal sanity, yet I believe this is the way to go.

What do you think?

How to convince your boss to use TDD

March 25, 2010 – 11:25

Some time ago I was contacted to help find arguments for introducing Test Driven Development into a company’s process. Or, at least, for starting to do some automated tests.

A quick chat revealed that, despite iterative development and a dedicated QC team, over the product’s two-year life the pace of development has been slowing down, and with each release it becomes more difficult to get it out of the door. Some defects even creep into production.

If it were a service company, like our 42 Coffee Cups, I’d suggest doing a small pilot project using TDD and comparing metrics. A product company, unless it wants to run a controlled experiment (or a side project), is left only with comparisons to other companies’ projects.

As we dug deeper, it became obvious that the team lead who contacted me had made no effort to baseline the team’s performance.

They do use JIRA and do weekly iterations, yet it’s used just as a sophisticated todo list.

Since the team lead’s guess was that the pace is directly affected by the bug count, and that this is related to the codebase size, the obvious metric would be the number of defects found weekly compared to the SLOC count. The hypothesis to prove: the larger the codebase gets, the more defects per KSLOC pop up. While this is obvious to the lead, it is far from obvious to the budget holders.
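As an illustration of the kind of graph this argues for (the numbers below are invented, not theirs), a few lines of Python are enough to turn weekly defect counts and SLOC totals into a defects-per-KSLOC series to show the budget holders:

```python
# Sketch of the defects-per-KSLOC metric. The numbers are invented for
# illustration; in practice they would come from JIRA and a line counter
# such as sloccount.
weekly_data = [
    # (week, defects_found, total_sloc)
    ("week 1", 12, 85000),
    ("week 2", 15, 88000),
    ("week 3", 21, 93000),
]

for week, defects, sloc in weekly_data:
    ksloc = sloc / 1000.0
    print("%s: %.0f KSLOC, %.2f defects/KSLOC" % (week, ksloc, defects / ksloc))
```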

Their arguments:

  • we already have a large codebase, and there is no chance to cover it with tests within a realistic timeframe and budget
  • tests are hard to support
  • the effect of tests is not obvious
  • tests double development time

From that point on, it was obvious how to advocate for automated tests:

  • we may start with tests for just new code and bugfixes
  • that’s not true (and we can provide references as proof)
  • …and here is the effect of the tests’ absence! – and show the graph of defects per KSLOC vs total KSLOC
  • tests do not double the development time, and their absence costs real money – and again show the graph of defect creep

The final suggestion was to introduce tests in guerrilla fashion – just start writing unit tests for defect fixes.
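For illustration, such a defect-fix test can be as small as the sketch below; the app, model and bug number are made up.

```python
# Hypothetical regression test written as part of a defect fix: it first
# reproduces the bug, then stays in the suite so the same error is made
# only once.
from django.test import TestCase

from shop.models import Order  # made-up app and model


class OrderTotalRegressionTest(TestCase):
    def test_discount_is_not_applied_twice(self):
        """Bug #123: the discount used to be subtracted twice from the total."""
        order = Order.objects.create(amount=100, discount=10)
        self.assertEqual(order.total(), 90)
```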

Automated django deployment

March 19, 2010 – 17:02

We at 42 Coffee Cups do a lot of Django projects. Additionally, I take care of developer staffing, which means one more test project per candidate.

That totals to about 20 commercial and internal projects at any time, plus 10-15 test projects.

That’s not a trivial amount, and given that we deploy projects onto a testing/staging server after each feature integration, the time spent on deployment is rather big. The last informal survey showed that a PM doing the integration work spends 10% to 50% of his time reviewing and deploying updates!

Of course, we automated it to the point of executing make deploy, but I’ve always wondered about other options. Especially given the virtues of Continuous Deployment we would love to enjoy on every project.

Right now, under the hood of make deploy, we have scripts that:

  • tag a version in a git repository
  • create a version archive
  • upload it to server
  • unpack it and juggle symlinks to mark this version as current (roughly sketched below)
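For the curious, a rough sketch of those steps as a Fabric 1.x task follows. It is an illustration, not our actual fabfile: the host, directory layout and project name are placeholders.

```python
# fabfile.py - rough sketch of the deploy steps above (Fabric 1.x).
# Host, directory layout and project name are placeholders.
from fabric.api import cd, env, local, put, run

env.hosts = ["staging.example.com"]
RELEASES = "/srv/myproject/releases"


def deploy(version):
    # 1. Tag the version in the git repository and build an archive of it.
    local("git tag -a v%s -m 'release %s'" % (version, version))
    local("git archive --prefix=%s/ v%s | gzip > /tmp/%s.tar.gz"
          % (version, version, version))
    # 2. Upload the archive to the server and unpack it.
    put("/tmp/%s.tar.gz" % version, "/tmp/")
    with cd(RELEASES):
        run("tar xzf /tmp/%s.tar.gz" % version)
    # 3. Flip the 'current' symlink to mark this version as current.
    run("ln -sfn %s/%s /srv/myproject/current" % (RELEASES, version))
```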

What’s missing is:

  • automated db migration
  • automated server restart
  • fallback if something goes wrong
  • automated creation and management of versioned environment

So I’ve researched the area a bit and come down to the following DIY solutions:

Am I missing something, trendy or not?

Perhaps there are commercial (SaaS?) solutions available that I’ve not been able to dig out?

What’s your experience in this regard?

Epiphany – Customer Discovery steps

February 2, 2010 – 02:44

After having a bunch of discussions at #sctest, #pykyiv and privately, I decided to post a distilled version of the Customer Discovery steps from The Four Steps to the Epiphany by Steven Blank.

Here it is. Just execute it:

Customer Discovery

State Hypotheses

  • Product Hypothesis
    • features
    • benefits
    • intellectual property
    • dependency analysis
    • product delivery schedule
    • total cost of ownership/adoption
  • Customer & Problem Hypothesis
    • types of customers
    • customer problems
    • a day in the life of your customers
    • organizational map and customer influence map
    • ROI justification
    • minimum feature set
  • Distribution & Pricing Hypothesis
  • Demand Creation Hypothesis
    • creating customer demand
    • influencers
  • Market Type Hypothesis
    • market type
    • market map (p. 56)
  • Competitive Hypothesis

Test “Problem” Hypothesis

  • Friendly First Contacts
    • list of 50 potential customers
    • get a referral
    • create a reference story
    • get 5-10 meetings
  • “Problem” Presentation
  • Customer Understanding
  • Market Knowledge

Test “Product” Hypothesis

  • First Reality Check
  • “Product” Presentation
  • Yet More Customer Visits
  • Second Reality Check
  • 1st Advisory Board

Verify

  • Verify the Problem
  • Verify the Product
  • Verify the Business Model
  • Iterate or Exit

Computational chemistry in Python – action plan

November 20, 2009 – 00:37

My favourite scripting language is Python, and there are quite a few interesting projects already done:

  • PyMol – molecular visualization system on an open source foundation
  • MMTK is an Open Source program library for molecular simulation applications.
  • PyQuante is an open-source suite of programs for developing quantum chemistry methods.
  • cclib is an open source library, written in Python, for parsing and interpreting the results of computational chemistry packages.

PyQuante and MMTK are the most suitable starting points for the method framework.

Action plan:

  1. Bootstrap PyQuante. It works out of the box, but some of its tests do not pass – have to check this out (see the smoke-test sketch after this list).
  2. Do the same for MMTK.
  3. Implement at least some gradient and force-constant functionality for PyQuante and make sure the geometry of simple molecules optimizes to something reasonable.
  4. Dissect popular and not-so-popular QCh software packages so they can work within the framework above.
  5. Compile a basis set and a control set of molecules to check and optimize the precision of different modeling methods.
  6. Sum up, review and work out a new plan.
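As a first step toward item 1, a minimal smoke test might look like the sketch below. It is written from memory of PyQuante’s documented H2 example, so the module paths, the rhf() return values and the units keyword are assumptions to verify against the installed version.

```python
# Hypothetical PyQuante smoke test for step 1: run a restricted
# Hartree-Fock calculation on H2 and sanity-check the energy.
# Names follow PyQuante's documented example and should be verified.
from PyQuante.Molecule import Molecule
from PyQuante.hartree_fock import rhf

# H2 with a bond length of 1.4 Bohr, as in the PyQuante docs.
h2 = Molecule('h2', [(1, (0, 0, 0)), (1, (0, 0, 1.4))], units='Bohr')

energy, orbital_energies, orbitals = rhf(h2)
print("H2 RHF energy: %f Hartree" % energy)  # expect roughly -1.1 Hartree
```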

Time for computational chemistry

November 20, 2009 – 00:05

Lately there haven’t been many things to write about.

When you scale your business, there are plenty of lessons and observations. But there isn’t that much you’d really like to tell the world about, and even less that you can, because of various NDA and confidentiality issues.

Nevertheless, it looks like, finally (knock-knock-knock), 42 Coffee Cups can run successfully without my 24×7 attention, and I can devote my time to other interests that have waited for too long.

One of them is computational chemistry.

I did some scientific work using computational chemistry modeling in 1990-1997 and still believe that it is a necessary element of nearly every future technology in health, energy, chemistry, IT, food, etc. for the next 50+ years.

So here is what I see needs to be done for the Tool to appear…

  1. Robust modeling methods to be used at various scales:
    • ab initio methods with high precision for small systems,
    • semiempirics for systems around 1K atoms and more,
    • MM approximations for systems with 10K-100K+ atoms
    • continuous body approximations for larger systems
  2. Interfaces between the methods listed above, so I can start a model at the macro level and drill down into the most interesting features to discover what happens at the atomic level
  3. Strong feedback from experimental methods:
    • compute what could be measured, preferably immediately
    • measure what could be computed
  4. Visualization, analytics and search systems to harvest the data, both computed and experimental

I still do not know how this should be packaged for business purposes – I’m too deep in software and too far from the product development cycles of large companies. I guess this will sort itself out as I go.

The cornerstone is the modeling method and the scripting glue, so that’s what I’ll start with.

Changed comments engine

September 3, 2009 – 11:11

I got tired of fighting comment spam and blocking useful discussions, so I switched from native WP comments to Disqus.

The only drawback I see is that Disqus failed to import the old comments properly.