TDD or Not TDD? That is the question!

What actually is TDD (Test Driven Development)? Is TDD dead?

Do you use this term for when tests actually drive development, or do you use the label TDD for the practice of ensuring code coverage by having unit tests? TDD can be taken to mean different things than the original meaning, and there are some risks from that shift in meaning.

I was recently searching online for discussion of TDD, and was surprised to find many pages describing TDD as simply ensuring unit tests are in place, while other pages used TDD to refer to tests actually driving development.  This difference in definition results in considerable confusion.

This page looks at what is accepted as best practice today, how that fits with the original meaning of TDD, and the dangers and problems that can, and already have, resulted from a shift in the meaning of TDD: what is dead and what is not dead.



Unit Test

It is generally assumed that a reader of this page will know what a ‘unit test’ is, but for clarity: a unit test is a program function that sets up specific inputs and then calls a target software ‘unit’ in order to verify that the output of the target software unit is as expected, when given those specific inputs.  A software unit could be a function, a class, a module or even an overall software package.
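As a sketch of that definition (the `add` function and its behaviour are invented here purely for illustration, not taken from any example on this page), a unit test in Python can be as small as:

```python
# target software 'unit': a hypothetical function, for illustration only
def add(a, b):
    return a + b

# the unit test: set up specific inputs, call the unit, verify the output
def test_add():
    assert add(2, 3) == 5    # specific inputs -> expected output
    assert add(-1, 1) == 0   # a second input combination

test_add()  # raises AssertionError if the unit misbehaves
```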

Unit Tests

‘Unit Tests’, plural, or perhaps even clearer (but longer) a ‘unit test suite’ denotes a set of unit tests that should contain sufficient individual tests to infer that a software ‘unit’ will perform as expected, for each possible combination of inputs that the software unit under test could be expected to encounter in normal use.

TDD (Test Driven Development)

There is no universally agreed meaning of TDD.  There is the original meaning from Kent Beck, and some say even Kent has changed his ideas, as we all do, but the original meaning is the only one in a book, so on this page I will tend to use that original meaning, except where I specifically discuss how people take TDD to mean something different.

From the original meaning, TDD is using tests to drive development. Such tests are specifically created not to form a test suite, but to enable software design and development. Some tests created during Test Driven Development are useful for a test suite, some may become redundant once software has been developed, and the TDD process does not automatically result in a complete set of Unit Tests.

Assertion Test.

This is a term introduced here, which may help with reading this page if nothing else.  Unit tests can have one or more assertions. These assertions should together make a cohesive unit test, and that is discussed on another page. In the following examples, Uncle Bob sometimes says he is adding a new unit test, when in fact he then adds a new assertion to an existing unit test.  How many assertions does it take to make a unit test? Ideally one, but in the real world it may take more.  When this page refers to an assertion test, it is an individual (assertion) component of a unit test, one it could be confusing to describe as a unit test.
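To make the distinction concrete (with a hypothetical leap-year ‘unit’ invented for this sketch), here is one unit test containing three assertion tests:

```python
import unittest

# hypothetical target 'unit', for illustration only
def is_leap_year(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

class TestLeapYear(unittest.TestCase):
    def test_century_rule(self):
        # one cohesive unit test made up of three assertion tests
        self.assertFalse(is_leap_year(1900))  # centuries are not leap years...
        self.assertTrue(is_leap_year(2000))   # ...unless divisible by 400
        self.assertFalse(is_leap_year(2100))
```

Whether those three asserts belong in one unit test or three is exactly the judgement call described above.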

Common to both TDD tests & Unit Tests (Test Suites)

Tests: The Only Real Specification

What does a program actually do? It passes the tests.

Any other specification is what someone believes the program should do, not what the program actually does.

A program is measured by its tests, and the results of those tests are the only real specification.  Confusingly, sometimes design goals are described as specifications.

Consider the specification of a camera or a car. Almost all specifications are established by measuring the values that are specified, e.g. engine power in horsepower or kilowatts.  Certainly, the measured value may match the value that was the design goal, but if, for example, the car had a design goal of 110kW engine power yet is actually measured to produce 105kW, it is only the measured value, not the design goal, which can be quoted as the product specification.  If the design goal were quoted as a specification, a customer would feel misled.

A program is measured by its tests, and the result of those tests is the real specification.

Easily Repeatable Automated Tests Are Best.

Some code is difficult to test automatically. How do you test a function that prints, for example?  For some code it is simply far easier to run the program and see what prints.  In almost all cases, a system redesign to allow an automated unit test is the only satisfactory solution.  Unit tests can even be presented as a system specification.
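One common form of that redesign (a sketch, not code from this article) is to have the unit write to an injected stream instead of printing directly, so an automated test can capture and verify the output:

```python
import io
import sys

# before: hard to test automatically, output goes straight to the console
# def report(total):
#     print(f"total: {total}")

# after: the output stream is a parameter, defaulting to the console
def report(total, out=sys.stdout):
    out.write(f"total: {total}\n")

def test_report():
    buf = io.StringIO()          # an in-memory stand-in for the console
    report(42, out=buf)
    assert buf.getvalue() == "total: 42\n"

test_report()
```

Normal callers do not notice the change, because the default is still the console, but the unit is now fully testable.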

A Failing Test Before Any Production Code.

No code should ever be written without first predetermining what the code should do.  This simply means do not start a task without first deciding what constitutes completing that task. For unit tests, add the unit test before the code is in place (if the production code already exists, still run the test before including the code in the system). For TDD as originally proposed, the test should be added before the solution has been determined.

TDD vs Unit Tests

A TDD Example with ‘Uncle Bob’

The following video of a talk by Uncle Bob is very useful, but quite long, so the main points will be discussed here without needing to watch the entire video. Consider now the video from 24m:05s through to 42m:00s.

A total of 10 assertion tests are created.  The first 9 assertion tests are best described as TDD tests, with the 10th the only actual unit test assertion.  This is because, as the story unfolds in the video, assertion tests 1 through 9 are all created without first creating the algorithm.  There is no algorithm other than what emerges as a result of incrementally adjusting code to pass tests. These tests drive development of a solution to the requirements of each test.  Test 10 (line 18) fits the definition of a conventional unit test.  The algorithm code already exists and works before this last test is written, and this test never exists as a failing test.

In fact it could be argued that all of the first 9 assertions are no longer required once test #10 is added.  It could be argued that the first test at least helps with documentation.  Perhaps even the first and second tests add to the explanation of the code, but clearly having an assertion test for every value from 1 through 9 is somewhat redundant.

At the other extreme, test cases such as factoring 0 (zero), or negative numbers, are not considered.  Sufficient tests to drive the development does not automatically ensure a full set of tests for all cases, and can result in some tests that are not really required once the development is complete.
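A sketch of the kind of code that emerges in the video (translated to Python here; the names are mine, not Uncle Bob's) makes both points concrete: the early driving assertions become redundant once the final test passes, while cases like 0 and negatives were never forced by any test:

```python
def factors_of(n):
    # the algorithm that emerges from incrementally passing tests:
    # repeatedly divide out the smallest remaining divisor
    factors = []
    divisor = 2
    while n > 1:
        while n % divisor == 0:
            factors.append(divisor)
            n //= divisor
        divisor += 1
    return factors

# TDD-style driving assertions (largely redundant once the last one passes)
assert factors_of(1) == []
assert factors_of(2) == [2]
assert factors_of(4) == [2, 2]

# the one conventional unit test: validates the completed algorithm
assert factors_of(2*2*3*3*5*7*11*11*13) == [2, 2, 3, 3, 5, 7, 11, 11, 13]

# never driven by any test: the tests are silent on what these *should* do
assert factors_of(0) == []     # falls straight through the loop
assert factors_of(-6) == []    # behaviour by accident, not by design
```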

Unit Test Without TDD Example

TDD or not, there is an important rule that the test should be in place before the code to be tested is in place, which enables verification that can fail, but that requirement does not make the test drive the solution.  In fact, if the solution is obvious, the solution will drive the test.

Clearly, at least by the time of the example video, Uncle Bob actually knew in advance how to code the solution to prime factors. If you are Uncle Bob and already know how to code the solution, why not move directly to test #10?  The advantage of using tests to drive development is that you can build up to the solution by adding new test cases, while having certainty that previous functionality still works.  A solution can be developed step by step, with the increasing set of tests providing certainty that every previous step is not being broken.  But what is the point of those steps if you already know the complete solution?  In that case, why not just create a test that validates the overall solution?

If you have an algorithm at the outset, then you could move directly to test number 10, factorsOf(2x2x3x3x5x7x11x11x13), and bypass all the simplistic tests 1 through 9, which test cases so simple that if any of those simple cases failed, test 10 would fail anyway.

Benefits and Limitations of TDD.


The promise of TDD is that the problem can be reduced to the simplest solution that passes the required tests.  When a complete solution seems challenging, instead of being locked out by the design challenge, development can commence immediately and build the solution piece by piece.  In the Uncle Bob example, a solution to factorsOf() arises from the tests without any formal design process.  In the late 90s, when Kent Beck and others first developed TDD, this seemed like magic.  Not only did solutions arise without a formal design process, it seemed that elegant solutions could arise from testing. It seemed all solutions could be provided this way, something which most proponents (including Uncle Bob, as discussed below) have since come to realise is not true.  Design driven from tests can solve problems not solved otherwise, but it simply is not an optimum solution, or even a solution, for every problem.


There is a famous quotation: ‘a camel is a horse designed by a committee’. The implication is that when design tasks are split, an elegant overall design can be missed.  Consider the factorisation function called with 101:   factorsOf(101)

The main loop will test whether every number from 2 through 100 is a factor of 101, yet once 11 is reached (where 11×11 > 101), it is already clear the number is prime; no number between 11 and 100 need be tested.  Perhaps development driven by tests would never discover this inefficiency?
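The optimisation that the committee-of-tests missed can be sketched like this (a Python translation of mine, not code from the talk): stop trial division once the divisor squared exceeds the remainder, and whatever remains above 1 must itself be prime:

```python
def factors_of(n):
    factors = []
    divisor = 2
    while divisor * divisor <= n:      # stop at the square root
        while n % divisor == 0:
            factors.append(divisor)
            n //= divisor
        divisor += 1
    if n > 1:
        factors.append(n)              # the remaining factor is prime
    return factors

# 101 is decided after trying only divisors 2..10, not 2..100
assert factors_of(101) == [101]
assert factors_of(2*2*3*3*5*7*11*11*13) == [2, 2, 3, 3, 5, 7, 11, 11, 13]
```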

Balancing Benefits and Limitations.

A solution arrived at through tests will not always be better than a solution planned by studying the overall problem.  The best approach is to consider both methods and compare solutions.  Driving to a solution through tests can break through when no overall solution is clear, but in the end very few software projects are as simple overall as the factorsOf example.  Most often it is only parts of the solution that will have an immediately clear solution.

Solutions where possible should start with an architecture, but as code is built and tested, the results allow for refining the architecture.

In some ways, the only difference between an immediately apparent solution and a solution driven by steps is the size of the steps.  The factorsOf() problem could actually be tackled as a single step, with a single test to be passed.  But if the solution is not apparent, then break it into steps and incrementally add tests.

Most software projects are more significant than ‘factorsOf’ and are too large to be developed in one step before testing.  They should be broken into steps, but should those steps be broken into smaller steps?

The balance between driving to a solution with staged tests and simply testing for the end result comes down to choosing the right sized steps to tackle as a single step.

The full original TDD has its place, but a more balanced development process should be taken overall.

The Three Rules of ‘TDD’?

Newton created three laws of motion.  There are three laws of thermodynamics.  Hey, even Isaac Asimov got to write three laws, so why not Uncle Bob?  Note there are questions as to which definition of TDD these three rules apply.  But in the case of both thermodynamics and Isaac Asimov, later review resulted in a more fundamental ‘zeroth’ law, so perhaps some review of Uncle Bob’s laws is also acceptable?  Uncle Bob compares his laws to the pre-surgery procedures that surgeons treat as ‘law’.  Although failure to follow the pre-surgery procedures suggests a surgeon is unprofessional, it should also be considered that following the procedures does not ensure a surgeon is a good surgeon. Following the laws for TDD alone will not ensure code is quality TDD code.

1. No production code without a failing test.

Recall that a test is a tangible specification, and at least at one level, this law should seem axiomatic. It could be translated as ‘have some specification of what you are going to code before you code, and you should not bother coding if the specification is already met’.

For example, if you set out to write a program that prints the national flag, your test might be ‘when I run it, what it prints should look like the national flag’.  The test is very subjective, could be considered an ad-hoc test, and is very hard to automate, but it is a test.  There should always be a test before you write any code.

It is very important that the test is a unit test. However, in the rare cases where a unit test is not practical, having a test that is as concrete as possible is still essential: the clearer the specification, the clearer the test.  A project can be started without a concrete overall specification, but at the very least each stage should be specified before that stage is commenced.  The specification, and hence the test, can still have flexibility.  But how flexible, and deciding what test(s) to apply, is critical.

I suggest this law is essential to any software development. No production code without a failing test, and unless there is a very sound reason why it is impractical, that test should be a unit test.

2. Apply tests one at a time, in the smallest increments possible

I have changed this ‘law’, and in fact still do not regard it as a clear ‘law’, but more of a goal.   The goal is hard to word with the precision required for a ‘law’, and it is more difficult to determine when it is being broken or followed. The original wording from Uncle Bob, ‘You are not allowed to write any more of a unit test than is sufficient to fail, and compilation failures are failures’, has two problems.  Firstly, it is open to being read as mandating the very part of the original Kent Beck definition of TDD that Uncle Bob is on the record as calling ‘horseshit’ (more on this later on this page); secondly, the wording is open to different interpretations.

The original Kent Beck definition of TDD would require strict adherence to tests driving all development, including design. The code to meet test number ‘n’ for a system (test = specification) must be in place prior to writing test number ‘n+1’ (the next specification).   Strictly adhering to this principle would mean if someone says to you, “I want a new program, and it must do these three things…” you would stop them and say “No, wait, I can only record one specification detail at a time!  Wait until the code is in place for the first thing, before considering any further functionality!”.   More normal convention would suggest that if it is planned that there are three things the program should do, surely what those three things are can be written down.  If you have good tools, the best way to record those three ‘things’, or specifications, is to record them as tests.  Those tests can still be activated one at a time, and that is what should be done.  Appropriate TDD is to activate tests on the code incrementally, one at a time, but actually recording them ahead of time should not be banned.  It is still possible to amend the specifications/tests as the system develops, without banning writing down suggested specifications/tests ahead of time in any form, either as code or as any other language form.

The second problem of the ‘law’ is that the words are open to interpretation. What exactly is sufficient to fail?  Perhaps ‘sufficient to be used as a failing test’ makes more sense?  And what does ‘write’ mean?  If a future test occurs to you ahead of time, should you never write it down? In practice, there should be some way of recording that tests are not to be applied yet, even if it means commenting them out, or preferably marking them as ‘future’ or some agreed notation.  With the factorsOf() example as explained and coded in the video, one assert at a time makes sense.  But if you know the solution, in which case there are too many asserts in the example, then adding all the asserts you need before adding code that should pass them all immediately simply makes sense. In fact, in the example, the last assert could be interpreted as several tests in one, but it is still practical.
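One concrete way of recording a test ahead of time without activating it is a skip marker. This sketch uses Python's stdlib unittest; the Python port of the factors function and the choice of ValueError for negatives are my assumptions, not from the talk:

```python
import unittest

def factors_of(n):
    # Python port of the video's emergent algorithm, for illustration
    factors, divisor = [], 2
    while n > 1:
        while n % divisor == 0:
            factors.append(divisor)
            n //= divisor
        divisor += 1
    return factors

class TestFactors(unittest.TestCase):
    def test_two(self):
        self.assertEqual(factors_of(2), [2])

    # recorded now, activated later: written down, not 'banned'
    @unittest.skip("future: behaviour for negatives not yet decided")
    def test_negative(self):
        with self.assertRaises(ValueError):
            factors_of(-1)

result = unittest.TestResult()
unittest.defaultTestLoader.loadTestsFromTestCase(TestFactors).run(result)
print(len(result.skipped), len(result.failures))  # 1 skipped, 0 failures
```

The future specification is visible in the code and in every test report, yet it never counts as a failing test until the skip marker is removed.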

3. Once code passes its tests, do not progress before considering tests for other conditions for the code just added.

Ok, this is not what Uncle Bob said in his laws (although it is followed in his example).  It could be claimed that this is about sound unit tests rather than under the heading TDD, but different people have different interpretations of terminology.

Uncle Bob’s third law is stated as You are not allowed to write any more production code than is necessary to pass the one failing test.  To me this is simply restating the first law: don’t write production code without a failing test.  Once the test passes, then you no longer have a failing test.  This rule describes what you should not do once production code passes tests, but rather than a reminder of law 1, perhaps consider what you should do once production code passes tests.  What you should do is think of other tests that are needed for that code.  In the factorsOf() example, Uncle Bob adds his final test exactly as described here.  What other tests are needed?  In this case the factorsOf(2x2x3x3x5…) test is added.  This test never fails, which shows Uncle Bob actually follows this amended third law.

The Confusion: Is TDD Dead?

At least three interpretations of the term ‘TDD’ are in use, including:

  1. The Original Kent Beck Full Concept of Using Tests to Drive Development (including design)
  2. Never Code without a failing test
  3. Any Use of Unit Tests is TDD

With such variation in meaning, confusion sets in.  One expert, who is using definition number 2, declares “any development not using TDD is unprofessional”.  Then another expert, hearing the statement but themselves using definition #1, responds “TDD has some uses, but more elegant designs can result from not using TDD”.  Then a third, non-expert, hears that second statement, but connects the statement with definition #3 and declares “experts declare that unit tests block the writing of quality software”.

You can see this play out over and over on the internet. You will see people claiming TDD is essential and others claiming TDD is dead, without the posters ever checking what exactly either those they are debating with, or their sources, actually mean by TDD.

Here is Uncle Bob declaring that a key original idea of TDD is ‘horseshit’.  The problem with promoting a new definition of TDD, as pointed out by Jim Coplien, is that people will find the original definition from the books and talks defining the topic, and believe that original idea is what they are being instructed to do.

Is TDD dead?

One idea within the original definition of TDD, that building all system architecture from tests will always produce the best solution, is indeed dead.  Nothing else about the original TDD idea is dead.  Unit tests are not dead, and building tests before coding is certainly not dead.  Requiring all design to originate from tests is the only part of TDD that is dead.  Building architecture from tests is also NOT dead, but it is now recognised that it will often not build the best architecture, and it is just one alternative, no longer a mandate.   It has since been realised that traditional system design still makes sense, and is still needed.  TDD is usually now redefined not to include that one dead idea, and as such TDD is not dead, just the one idea that went too far.  In fact TDD has been redefined to mean many different things. Redefining TDD as something new, like TDD = unit tests, and then declaring this redefined TDD is dead, is just confusing.

I have even seen more than one debate, as with the example already quoted from, where the against-TDD speaker effectively concedes that TDD as defined by the pro-TDD speaker does make sense, and that it is one specific part of the original definition that is dangerous.   Arguments for and against TDD tend to arise from different interpretations of just what TDD actually means, and from what definition different people are using.


Different definitions of what TDD means are in circulation. Before considering any point of view on TDD, it is advisable to check how the source of the opinion is interpreting the term TDD.  The originators of TDD did get carried away with capabilities which are very useful, but those original ideas should not be made into laws.

Code should only be written with a test first identified, and unless there is a very good reason otherwise, that test should be a unit test.

Driving Development by Tests is useful, especially for specific detailed problems, but is not a practice that provides all the answers and may not answer the big picture of what is required.


Neither full TDD, nor writing code only to failing tests,  will automatically result in a full Unit Test suite.


Evaluation of Kotlin Native for mobile: January 2018

We have been looking at Kotlin Native (KN) as a viable solution for mobile development, as a competitor to solutions like Xamarin, or to completely separate developments for iOS and Android. To be a viable mobile solution in general requires Kotlin Native (KN) to be workable for iOS, as first announced.

On inspection, I believe that KN makes sense, and very good sense in the longer term for mobile development, but Kotlin Native is still far from production ready, for the following reasons:

  1. This is the most important reason: the platform-specific APIs haven’t been published as Maven artifacts, which means you can’t add them as project dependencies in Gradle, which leads to many other problems, such as:
    1. no syntax highlighting
    2. no autosuggestions
    3. your KN-related code is fragile, as they said:
      • `warning: IMPORTANT: the library format is unstable now. It can change with any new git commit without warning!`
  2. No documentation for the APIs. And without any support from the IDE, this will greatly slow down the job.
  3. Multiplatform projects need to support KN in order to get benefits in terms of architecture. You can still just use the old-school way (by declaring an `interface`). But this should be ready in 0.6 (my feeling according to Slack; still not sure).
  4. In response to a question on the Kotlin Native Slack channel, “Any ETA on a beta/1.0 version?”, Nikolay Igotti replied (25 December 2017): “Not yet, however, being v0.x is mostly an internal thing, in general both compiler and runtime are pretty well tested and known to work on rather complex codebases, such as Video Player.”

  5. CLion: The benefit of using CLion seems to be more for the KN dev team, and for projects integrating with the C family, which is not the case for iOS projects.  When developers deal with the cross-platform setup, they need to dig into the LLVM layer in order to build the bridge. CLion is the only IDE in the JB family which supports Kotlin Native at this time, which is problematic for projects looking to go multiplatform with the JVM or JS, and for iOS projects which combine with Swift and Xcode.  There is no announced plan for supporting the other JB IDEs. Further, from the project leader’s talk in Slack in late January 2018 on support for IDEA: `this is not yet completely decided, but CLion is currently the only option`. And you know, CLion doesn’t support Gradle, and they use Gradle to build.    The other possibility is Kotlin support in AppCode, which there are suggestions may be coming, and could be the best solution.
    • So we have a difficult situation here, which is:
      • CLion doesn’t support Gradle, and the issue has been there since 2014.
      • The multiplatform project doesn’t support KN yet. This one is both easy and difficult.
        • It’s easy because once the Maven dependencies are there, the support will be nearly there. Or we could build the whole thing with `gradle` ourselves.
        • It’s hard because, as I said, the KN libs haven’t been published as Maven dependencies yet.
      • And from the talk on Slack, it seems that the team is holding back the release mainly because the KN libs have a completely different format, even for the file extension; it’s called `.klib` now. So uploading it to Maven or JCenter seems not ideal. I assume JB might end up building a new repository just for KN libs.

And when there are problems with both IDEA and CLion, a potential answer from JetBrains might be a new IDE just for Kotlin Native. The following video may be evidence for this: at 6:20 Andrey Breslav said (in Russian) that they have started development of a new commercial product for cross-mobile development, Android and iOS.

But it seems that AppCode with KN support should land first, according to the Slack chat.

The team leader has said in the slack channel that they will ask for dogfooding once it’s ready. 🙂

If app developers wish to only build using the kotlin-std-lib, and inject the platform-specific API at runtime, it’s doable. But then your codebase will be a mess, because you need to build the bridge yourself, as the Kotlin types are converted to some special interface type which you need to implement on the Swift side as well: all to cope with an interim solution which will be deprecated in a future version release.

So, 3 things are crucial for using KN in production:

  1. Decent IDE support such that we can inspect the API signatures (no matter whether CLion, IDEA or AppCode, this is essential)
  2. Multiplatform project support for KN
  3. The ability to create a KN project without depending on the KN repo, which means they need to publish their platform libs in order to enable us to add them as project dependencies in Gradle. Otherwise, a single build of the KN repo takes 2 hours on my 2017 i7 15″ retina MacBook Pro.

All three are needed for writing KN-related apps.

But I will keep an eye on KN, because as I dig in more, KN starts to make more sense.

  1. You can really share your logic. The most awesome part is that you can invoke platform-specific APIs from the Kotlin side, which means you don’t need to deal with communication between languages, and can really embed heaps of logic in the KN base.
  2. The multiplatform project is a really great way to share code across platforms. It just makes sense. You abstract your code into `common`, `js`, `jvm`, `ios` or `android`, and Gradle will grab the related pieces to compile according to the platform you want to build against.
  3. This sort of embrace-the-platform-differences-rather-than-write-everything-once-and-run-anywhere concept has granted KN a very promising future compared to Xamarin’s replace-them-all.

Building: No need with python?

Yes, building is needed with python, but in one special case it can be hidden in the background. This page provides background on building, both with python and kotlin.

It can seem that python programs do not even need building, but the reality is that some form of build is needed with any program.  The good part of building in python is that building as you develop is so simple you don’t really notice.  The negative is that if you later want to package up what you have developed, you may be confronted with one of a wide range of different build options, some of which can be quite complex.

Building Introduction Topics:

  • What is Building?
  • The Two Ways to Build: Environment vs Package Build
    1. Environment Build
    2. Package Build
  • Building in Python
  • Building in Kotlin
  • Conclusion: The different focus of Python and Kotlin

What is Building?

A Definition: Building is the process of putting in place all the components needed for a program, and providing the code with location information for each component.

The components are the resources needed by the program, such as program code for library functions, and images that may be used by the program.

Developing in python, it can seem like there is no such step as ‘building’.  Just type ‘python ’ followed by the program name, and python will do all that is required.  The reality is that this is an environment build approach, and during development any new libraries or other resources are added to the environment. So the steps to prepare the environment happen over time and can be forgotten. It is only when there is a need to run the program on a computer other than the development computer, and all the environment install steps need to be repeated, that the environment build gets noticed.

Note also that reliance on this approach limits the use of python.  While it can appear that the simplest way of running python needs no build at all, that is only because the simplest way of running python relies heavily on an ‘environment build’.

The Two Ways to Build

Environment Build vs Package Build

An environment build is where all the resources needed for the program are installed in the environment prior to running the program, enabling different programs to share the same resources.

By contrast, package build is where the resources for the program are packaged inside the program file, ensuring each program is self contained and independent.

1. Environment Build.

For an environment build, it is the environment around the program that is built. An environment build means running the program on a new computer will require an ‘install’. The install can either be a series of steps to be followed manually, or there can be an install program to perform those steps.

The advantages of environment build are:

  • no need to re-install resources that have previously been installed in the environment by another program(s)
  • during development, each new resource can be installed as required and independently, resulting in only one build for the complete development cycle, spread out over the entire development cycle

Disadvantages of environment build:

  • distribution of the application requires either a manual install process, or building of an installer which can be an additional development step
  • different applications may want different versions of resources to be present in the environment which may cause complex conflicts that are difficult to identify and/or resolve

 2. Package Build

A package build is where the components required by the program are combined into a single runnable ‘package’ file.

Advantages of package build:

  • distribution of the application can be as simple as just one file
  • each application contains its own components so no version conflicts can occur

Java .jar files use a pure package build approach, which does require a build before every run, but means a fully portable self-contained program is readily available without extra steps.

Building in Python

Building in python is very different for different uses of python.

Developer is the end user: One of the most common situations with python is the SEAAS developer, who is both the developer of the program and the end user of the program. The resources needed by the program are installed as the program is built, so there never seems to be a build at all.

Building in this case mostly becomes installing python, then ensuring all the packages required by the imports are installed. If the imports all work, the environment for the program has been built.  Once the environment is built, the program can easily be run without further steps.  The disadvantage of this ‘build the environment’ approach is that for each new version of python, the environment to satisfy all python programs must be rebuilt, or alternatively a virtual environment approach must be adopted for development, and deployment can be problematic.

Web Server: Even if a web server is viewed by millions of people, the application only needs to be installed once per server, and it can be installed by the developer or other very skilled people.  Manual install of the environment for the python code, even if there are several steps, is entirely practical and is done by the developer.

Mobile Applications: Kivy is probably the best mobile framework for python, but Kivy apps, like all mobile apps, must be fully packaged.  Building a package is not part of the standard python workflow, so the specifics of packaging Kivy apps must be learnt, and this is a complex and often fragile process.

Sharing with other friend/colleague python programmers (SEAAS): Simply send the source file; the other programmer probably already has python (though perhaps not the right version) installed.

With simple python programs, there are no other ‘parts’, just python itself and just the one file.  As programs grow, there are imports, and then libraries to install to enable more imports. If the person installing knows pip install, they will quickly satisfy the other requirements.  So: no build by the developer, and a manual environment build by the end user that is quite straightforward, provided the end user can develop in python.

Sharing libraries and/or applications on PyPI: PyPI (or the “cheeseshop”) allows sharing open source python projects ready for simple installation into any computer’s python environment with a simple ‘pip install’.

Windows / MacOS applications: To distribute an application to be installed and run by someone who need not know how to program in python, there are a variety of solutions. py2exe and pyinstaller are examples, and in each case the process is equivalent to a build that automates the install for regular users of the program.
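For example, pyinstaller can bundle a script and an interpreter into one self-contained file. A sketch (`myprogram.py` is a hypothetical script):

```shell
# pyinstaller is itself installed with pip
pip install pyinstaller

# Bundle the script plus a python interpreter into a single executable file
pyinstaller --onefile myprogram.py

# The result appears under dist/ and runs with no python install needed
./dist/myprogram
```

This is, in effect, python adopting a package build for this one scenario, rather than the usual environment build.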

Building in Kotlin

The bad news: There is always a build process.

The good news: There are build processes that work for all scenarios and are highly refined.

First, the bad news.  Even the kotlin “hello world” program needs a build.  The ‘hello world’ program must call a print function, and that function has to be connected with the program code.  With python, the print function is in the standard library, which was installed into the python environment when python was installed, so the developer needs no further build, unless that developer wants to send the program to a friend as a “hello.exe” type file, which would of course need a build.  However, for “hello world”, having a “hello.exe” is not needed, so you do not notice.

With kotlin, you are going to get the equivalent of “hello.exe”, so there will be a build. The “hello” code must be built into a file together with the “print” function to make a complete program, and that requires a build.  All kotlin programs are made ready for installation on a computer, without first installing kotlin on any computer that will run the program: either as “hello.exe” with kotlin native, “hello.jar” with the java version, or “hello.js” to run in a browser.  Kotlin offers that choice (a level of choice not matched by python) but always requires a build.
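To make this concrete, here is the complete kotlin “hello world”, with the JVM-target build step shown in the comments (using the standard `kotlinc` command line compiler; the file name is ours):

```kotlin
// hello.kt
fun greeting(): String = "hello, world"

fun main() {
    println(greeting())
}

// Build a self-contained, runnable jar (the "hello.jar" of the text):
//   kotlinc hello.kt -include-runtime -d hello.jar
// Run it on any machine with a java runtime, no kotlin install required:
//   java -jar hello.jar
```

The `-include-runtime` flag is what packages the kotlin runtime (including `println`) into the jar, making it the fully self-contained package build described above.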

Secondly, the good news: the build is very simple. Because every developer is doing builds all the time, building is highly evolved.  Building can seem complex, because it is very flexible, and learning all that can be done with builds is still a huge task.  But learning simple builds, equivalent to the python examples given above, is very easy.  The trap to avoid is trying to learn everything that can be done with builds before getting started.
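To show how little a simple build requires, here is a sketch of a minimal gradle configuration (Kotlin DSL) for a kotlin JVM program; the plugin version and main class name are assumptions for illustration:

```kotlin
// build.gradle.kts - a minimal build, sufficient for a small kotlin program
plugins {
    kotlin("jvm") version "1.9.22"  // assumed version; use any current release
    application
}

repositories {
    mavenCentral()  // where dependencies (if any) are fetched from
}

application {
    mainClass.set("HelloKt")  // hello.kt compiles to the class HelloKt
}
```

With this file in place, `gradle run` builds and runs the program, and `gradle build` produces a distributable package; everything beyond this can be ignored until actually needed.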

Conclusion: The different focus of kotlin and python.

Note: What applies here for python, also applies for many other ‘dynamic’ languages.

For someone writing programs for their own use, or for their colleagues who can also program in python, python provides a virtually ‘build free’ development cycle. Building can be virtually ignored.

For web servers, where millions can use a site built in python and installed just once, python also provides close to a build free experience, and this time even for those who program as a profession.

For applications or mobile apps distributed to the mass market, and particularly apps/applications built by professionals to earn revenue, the build free experience is no longer available, and building can then be complex and more difficult than with kotlin.

Python: With respect to building, python is at its best for programs written to be used by the developer and other programming colleagues, or for web servers or web services.

All kotlin programs require building. For “hello world” it can be so automatic you will not notice, but you will notice soon, even if only building programs for your own use.

However, the build systems available with kotlin are more consistent and more powerful. In fact, some notable large scale python projects, such as at linkedin, use kotlin ecosystem build systems. Despite the desirability of the kotlin approach, building in python is so scattered that developers moving to kotlin may have their first ever exposure to build systems when using kotlin.  Note: in an ideal world, all python projects could use the same build system as other languages such as kotlin (as is the case at linkedin).

There is a learning curve to building with kotlin, and it will be a nice shallow curve if you keep it simple, or a very steep curve if you try to learn the most advanced building possibilities at the start (when you don’t need them).

Kotlin: you can have one build system to build everything, including professional applications, in place of the several different systems needed with python, and the power available is so compelling that advanced python users invest considerable resources in making the same system available with python.


Gradle with Intellij: Using build.gradle

When first opening a kotlin project that has a gradle configuration, follow the steps below.  If you haven’t installed gradle, install it first.

Open Intellij:

  • Open:
    • If Intellij is already running: from the “File” menu, click “Open”
    • If Intellij is not already running: choose “Import Project”
  • Select the build.gradle from the project you want to open
  • Click “Open as Project” in the pop-up menu
  • In the following “Import Project from Gradle” dialog:
    • Select “Use local gradle distribution”
    • For “Gradle JVM”: pick your local JDK
    • If you encounter the message “File already exists”, select “Yes”
    • If you encounter “There is an existing project in …”, select “Delete Existing Project and Import”
    • Click “OK”

You should now have successfully imported the gradle project.

You may encounter the error “Java home is different” when syncing the gradle project; in that case, check that the “Gradle JVM” selected matches your installed JDK.

If you encounter the error “The specified Gradle distribution ‘blahblah’ does not exist.”, it means the gradle version currently configured for this repo is no longer available.

  • If you selected “Use gradle wrapper task”:
    • check the wrapper task in build.gradle and make sure it specifies an available version number
    • delete the gradle folder in the project root
    • re-sync gradle
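The wrapper task mentioned above looks like the following in build.gradle (the version number here is an assumption; use any version still published):

```groovy
// Pin the gradle wrapper to an available release
wrapper {
    gradleVersion = '8.5'
}
```

Re-running the wrapper task after editing this regenerates the gradle folder with the pinned version.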

When you see the following message:

  1.  Configure gradle in project – DO NOT DO THIS!!!
  2. Import gradle project – DO THIS ONE ONLY

The trap is that the first suggestion modifies the file and, in our trials, this normally breaks the build.gradle file.  If you have selected it, the only solution is to undo the modifications made by IntelliJ and try again.

The gradle import may require setting the gradle home directory.  Set this to the location where you installed gradle; on a Mac, this is typically /usr/local/opt/gradle/libexec.

Gradle window: View / Tool Windows / Gradle, or “Gradle” in the right sidebar.

Note: if the import gradle action does not appear, check whether the gradle window is already available, in which case the gradle import has already taken place.  If not, try closing and reopening the project after checking that the build.gradle file is in the project root.

Once the gradle import is complete (it takes a while), open the gradle window (View / Tool Windows / Gradle, or gradle in the right sidebar), expand tasks/build, and activate the ‘build’ task.

It might be useful to then select the gradle build in the run/debug toolbar, by clicking the dropdown and selecting an option with the green gradle icon, before using Build / Build Project or the run or debug buttons from that toolbar.  Do not use any of those options until the gradle import is complete.

When will your project ‘grow up’ and require typesafe code?

There is a common belief that after an initial, very agile development period, “grown up projects should switch to a statically typed language”.  This also raises the question: “are type-safe languages less suitable for early stage development?”  This page considers the evidence for these beliefs, and also asks: if there is a benefit to starting dynamic and switching to static, what is the crossover point?

TL;DR? As usual, read the headings and only read beyond a heading when you choose.  But in summary, modern statically typed languages, and specifically kotlin, are bringing forward the point where adoption is logical, to the point where it can now be argued that any project that justifies its own repository would benefit from starting out as a kotlin project.