Youry's Blog

Archive for the ‘SW Projects’ Category

How To Get Hired — What CS Students Need to Know & A Future for Computing Education Research

  1. A Future for Computing Education Research (fulltext)

  2. How To Get Hired — What CS Students Need to Know (the essay reposted below)

I’ve hired dozens of C/C++ programmers (mostly at the entry level). To do that, I had to interview hundreds of candidates. Many of them were woefully unprepared for the interview. This page is my attempt to help budding software engineers get and pass programming interviews.


What Interviewers are Tired Of

A surprisingly large fraction of applicants, even those with master’s degrees and PhDs in computer science, fail during interviews when asked to carry out basic programming tasks. For example, I’ve personally interviewed graduates who can’t answer “Write a loop that counts from 1 to 10” or “What’s the number after F in hexadecimal?” Less trivially, I’ve interviewed many candidates who can’t use recursion to solve a real problem. These are basic skills; anyone who lacks them probably hasn’t done much programming.

Speaking on behalf of software engineers who have to interview prospective new hires, I can safely say that we’re tired of talking to candidates who can’t program their way out of a paper bag. If you can successfully write a loop that goes from 1 to 10 in every language on your resume, can do simple arithmetic without a calculator, and can use recursion to solve a real problem, you’re already ahead of the pack!
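
For concreteness, here is roughly what those warm-up questions amount to in C++. The choice of factorial as the “real problem” for recursion is just one common interview example, not necessarily what any particular interviewer would ask:

```cpp
#include <cassert>

// The classic warm-up: a loop that counts from 1 to 10.
int sum_1_to_10() {
    int sum = 0;
    for (int i = 1; i <= 10; ++i)  // visits 1, 2, ..., 10
        sum += i;
    return sum;
}

// Recursion on a small but real problem: factorial.
long factorial(int n) {
    return n <= 1 ? 1 : n * factorial(n - 1);
}

// The hexadecimal question: the number after F (15) is 10 (i.e. 16).
int after_hex_f() {
    return 0xF + 1;  // 0x10
}
```

If any of these take more than a minute or two in a language on your resume, that is exactly the gap interviewers notice.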

What Interviewers Look For

As Joel Spolsky wrote in his excellent essay The Guerrilla Guide to Interviewing:

1. Employers look for people who are Smart and Get Things Done

How can employers tell who gets things done? They go on your past record. Hence:

2. Employers look for people who Have Already Done Things

Or, as I was told once when I failed an interview:

3. You can’t get a job doing something you haven’t done before

(I was interviewing at HAL Computers for a hardware design job, and they asked me to implement a four-bit adder. I’d designed lots of things using discrete logic, but I’d always let the CPU do the math, so I didn’t know offhand. Then they asked me how to simulate a digital circuit quickly. I’d been using Verilog, so I talked about event simulation. The interviewer reminded me about RTL simulation, and then gently said the line I quoted above. I’ll never forget it.)
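
For readers curious about that adder question: once you know the gate equations, a four-bit ripple-carry adder is straightforward to sketch in software. This is one possible simulation sketch, not necessarily the answer HAL’s interviewer had in mind:

```cpp
#include <cassert>

// One full adder expressed as gate equations (XOR, AND, OR).
void full_add(bool a, bool b, bool cin, bool& sum, bool& cout) {
    sum  = a ^ b ^ cin;
    cout = (a & b) | (cin & (a ^ b));
}

// Four-bit ripple-carry adder: chain four full adders, feeding each
// stage's carry-out into the next stage's carry-in.
unsigned add4(unsigned a, unsigned b, bool& carry_out) {
    unsigned result = 0;
    bool carry = false;
    for (int i = 0; i < 4; ++i) {
        bool s = false;
        full_add((a >> i) & 1, (b >> i) & 1, carry, s, carry);
        result |= static_cast<unsigned>(s) << i;
    }
    carry_out = carry;
    return result;  // always in 0..15
}
```

The ripple-carry design is the slowest but simplest adder; the point of the interview question was knowing the discrete-logic view at all, not picking the fastest circuit.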

Finally, you may even find that

4. Employers Google your name to see what you’ve said and done

What This Means For You

What the above boils down to is: if you want to get a job programming, you have to do some real programming on your own first, and you have to earn a public reputation, however minor, as a programmer. Don’t wait for your school to teach you how to design and program; it might never get around to it. College courses in programming are fine, probably even necessary, but most programming courses don’t provide the kind of experience that real programming gives, and real employers look for real programming experience.

Malcolm Gladwell wrote in Outliers,

… Ten thousand hours of practice is required to achieve the level of mastery associated with being a world-class expert — in anything.

Seems about right to me. I don’t know how many hours it takes to achieve the level of mastery required to program well enough to do a good job, but I bet it’s something like 500. (I think I had been programming or doing digital logic in one way or another for about 500 hours before my Dad started giving me little programming jobs in high school. During my five years of college, I racked up something like several hundred hours programming for fun, several hundred more in CS/EE lab courses, and about two thousand on paid summer jobs.)

But How Can I Get Experience Without a Job?

If you’re in college, and your school offers programming lab courses where you work on something seriously difficult for an entire term, take those courses. Good examples are term-long project courses such as operating systems, compilers, or software engineering labs.

Take several of this kind of course if you can; each big project you design and implement will be good experience.

Whether or not you’re in college, nothing is stopping you from contributing to an existing Open Source project. One good way to start is to add unit or regression tests; nearly all projects need them, but few projects have a good set of them, so your efforts will be greatly appreciated.
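
As a sketch of how low the barrier is, a unit test can be a handful of assertions. The `trim` function below is a made-up stand-in for whatever function the project you join actually needs tested:

```cpp
#include <cassert>
#include <string>

// A hypothetical library function standing in for whatever you'd test
// in a real project: strip leading/trailing spaces and tabs.
std::string trim(const std::string& s) {
    const auto first = s.find_first_not_of(" \t");
    if (first == std::string::npos) return "";
    const auto last = s.find_last_not_of(" \t");
    return s.substr(first, last - first + 1);
}

// A regression test can be this small: each assertion pins down one
// behavior so future changes can't silently break it.
void test_trim() {
    assert(trim("  hello ") == "hello");
    assert(trim("\t \t") == "");
    assert(trim("x") == "x");
}
```

Real projects use a test framework rather than bare `assert`, but the idea is the same: pick a behavior, pin it down, submit the patch.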

I suggest starting by adding a conformance test to the Wine project. That’s great because it gives you exposure to programming both in Linux and in Windows. Also, it’s something that can be done without a huge investment of time; roughly 40 work hours should be enough for you to come up to speed, write a simple test, post it, address the feedback from the Wine developers, and repeat the last two steps until your code is accepted.

One nice benefit of getting code into an Open Source project is that when prospective employers Google you, they’ll see your code, and they’ll see that it is in use by thousands of people, which is always good for your reputation.

Quick Reality Check

If you want a quick reality check as to whether you can Get Things Done, I recommend online practice rooms. If you can complete one of their tasks in C++ or Java within an hour, and your solution actually passes all the system tests, you can definitely program your way out of a paper bag!

Here’s another good quick reality check, one closer to my heart.

Good luck!

Please let me know whether this essay was helpful. You can email me at dank at

Shameless Plug

I’m looking for a few good interns. If you live in Los Angeles, and you are looking for a C/C++ internship, please have a look at my internship page.


Written by youryblog

January 17, 2015 at 6:40 PM

Effective System Modeling

Effective System Modeling (from

This post is a rough “transcript” (with some changes and creative freedom) of a session I gave in the Citi Innovation Lab, TLV about how to effectively model a system.

A Communication Breakdown?

Building complex software systems is not an easy task, for a lot of reasons. All kinds of solutions have been invented to tackle the different issues. We have higher level programming languages, DB tools, agile project management methodologies and quite a bit more. One could argue that these problems still exist, and no complete solution has been found so far. That may be true, but in this post, I’d like to discuss a different problem in this context: communicating our designs.

One problem that seems to be overlooked, or not addressed well enough, is the issue of communicating our designs and system architecture. Working alone, experienced engineers are usually quite capable of coming up with elegant solutions to complex problems. But the realities and dynamics of a software development organization, especially a geographically distributed one, often require us to communicate and reason about systems developed by others.

We – software engineers – tend to focus on solving technical issues and designing the systems we’re building. This often leads us to forget that software development, especially in the enterprise, is almost always a team effort. Communicating our designs is therefore critical to our success, but is often viewed as a negligible activity at best, if not a complete waste of time.

The agile development movement, in all its variants, has done some good to bring the issues of cooperation and communication into the limelight. Still, I often find that communication of technical details – structure and behavior of systems, is poorly done.

Why is that?

“Doing” Architecture

A common interpretation of agile development methods I often encounter tends to throw the baby out with the bathwater. I hear about people/teams refusing to do “big up-front design”. That in itself is actually a good thing in my opinion. The problem starts when this translates into no design at all, which in turn means not wanting to spend time on documenting your architecture properly, or on how it’s communicated.

But as anyone who’s been in this industry for more than a day knows – there’s no replacement for thinking about your design and your system, and agile doesn’t mean we shouldn’t design our system. So I claim that the problem isn’t really with designing per se, but rather in the motivation and methodology we use for “doing” our architecture – how we go about designing the system and conveying our thoughts. Most of us acknowledge the importance of thinking about a system, but we do not invest the time in preserving that knowledge and discussion. Communicating a design or system architecture, especially in written form, is often viewed as superfluous, given the working code and its accompanying tests. From my experience this is often the case because the actual communication and documentation of a design are done ineffectively.

My view was also strengthened after hearing Simon Brown talk about a similar subject, in a way that resonated with me. An architecture document/artifact should have “just enough” up-front design to understand the system and create a shared vision. An architecture document should augment the code, not repeat it; it should describe what the code doesn’t already describe. In other words – don’t document the code, but rather look for the added value. A good architecture/design document adds value to the project team by articulating the vision on which all team members need to align. Of course, this is less apparent in small teams than in large ones, especially teams that need to cooperate on a larger project.

As a side note I would like to suggest that besides creating a shared understanding and vision, an architecture document also helps in preserving the knowledge and ramping-up people onto the team. I believe that anyone who has tried learning a new system just by looking at its code will empathize with this.

Since I believe the motivation to actually design the system and solve the problem is definitely there, I’m left with the feeling that people often view the task of documenting it and communicating it as unnecessary “bureaucracy”.
We therefore need a way to communicate and document our system’s architecture effectively. A way that will allow us to transfer knowledge, over time and space (geographies), but still do it efficiently – both for the writer and readers.
It needs to be a way that captures the essence of the system, without drowning the reader in details or burdening the writer with work that will prove to be a waste of time. Looking at it from a system-analysis point of view, reading the document is quite possibly the more prominent use case, compared to writing it; i.e. the document is going to be read a lot more than written/modified.

When we come to the question of modeling a system whose end result should be readable by humans, we need to balance the amount of formalism we apply to the model. A rigorous modeling technique will probably result in a more accurate model, but not necessarily an easily understandable one. Rigorous documents tend to be complete and accurate, but exhausting to read and follow, thereby defeating the purpose we’re trying to achieve. At the other end of the scale are free-text documents, often with some scribbled diagrams, which explain the structure or behavior of a system, often inconsistently. These are hard to follow for different reasons: inaccurate language, inconsistent terminology, and/or an ad-hoc (i.e. unfamiliar) modeling technique.

Providing an easy to follow system description, and doing so efficiently, requires us to balance these two ends. We need to have a “just enough” formalism that provides a common language. It needs to be intuitive to write and read, with enough freedom to provide any details needed to get a complete picture, but without burdening the writers and readers with unnecessary details.
In this post, I try to give an overview of, and pointers to, a method I found useful in the past (not my invention), and that I believe meets the criteria mentioned above. It is definitely not the only way and may not suit everyone’s taste (e.g. Simon Brown suggests something similar but slightly different); but regardless of the method used, creating a shared vision, and putting it in writing, is something useful when done effectively.

System != Software

Before going into the technicalities of describing a system effectively, I believe we need to make the distinction between a system and its software.

For the purposes of our discussion, we’ll define software as a computer-understandable description of a dynamic system; i.e. one way to code the structure and behavior of a system in a way that’s understandable by computers.
A (dynamic) system on the other hand is what emerges from the execution of software.

To understand the distinction, an analogy might help: consider the task of understanding the issue of global warming (the system) vs. understanding the structure of a book about global warming (the software).

  • Understanding the book structure does not imply understanding global warming. Similarly, understanding the software structure doesn’t imply understanding the system.
  • The book can be written in different languages, but it’s still describing global warming. Similarly, software can be implemented using different languages and tools/technologies, but it doesn’t (shouldn’t) change the emergent behavior of the system.
  • Understanding global warming comes from reading the book’s content. Similarly, the system is what emerges from executing the software.

One point we need to keep in mind, and where this analogy breaks, is that understanding a book’s structure is considerably easier than understanding the software written for a given system.
So usually, when confronted with the need to document our system, we tend to focus on documenting the software, not the system. This leads to ineffective documentation/modeling (we’re documenting the wrong thing), eventually leading to frustration and missing knowledge.
This is further compounded by the fact that existing tools and frameworks for documenting software tend to be complex and detailed, with the tools emphasizing code generation rather than human communication; this is especially true for UML.

Modeling a System

When we model an existing system, or design a new one, we find several methods and tools that help us. A lot of these methods define all sorts of views of the system – describing different facets of its implementation. Most practitioners have surely met one or more different “types” of system views: logical, conceptual, deployment, implementation, high level, behavior, etc. These all provide some kind of information as to how the system is built, but there’s not a lot of clarity on the differences or roles of each such view. These are essentially different abstractions or facets of the given system being modeled. While any such abstraction can be justified in itself, it is the combination of these that produces an often unreadable end result.

So, as with any other type of technical document you write, the first rule of thumb is:

Rule of thumb #1: Tailor the content to the reader(s), and be explicit about it.

In other words – set expectations. Be explicit early on about what you’re describing and what knowledge (and usually technical competency) is expected of the reader.

Generally, in my experience, three main facets are the most important ones: the structure of the system – how it’s built; the behavior of the system – how the different components interact on given inputs/events; and the domain model used in the system. Each of these facets can be described in more or less detail, at different abstraction levels, and using different techniques, depending on the case. But these are usually the most important facets for a reader to understand the system and approach the code design itself, or read the code.

Technical Architecture Modeling

One method I often find useful is Technical Architecture Modeling (TAM), itself a derivative of Fundamental Modeling Concepts (FMC). It is a formal method, but one which focuses on human comprehension. As such, it borrows from UML and FMC to provide a level of formalism which seems to strike a good balance between readability and modeling efficiency. TAM uses a few diagram types, of which the most useful are the component/block diagram, used to depict a system’s structure or composition; the activity and sequence diagrams, used to model a system/component’s behavior; and the class diagram, used to model a domain (value) model. Other diagram types are also included, e.g. state charts and deployment diagrams, but these are less useful in my experience. TAM also has some tool support in the form of Visio stencils, which makes it easier to integrate into other documentation methods.

I briefly discuss how the most important facets of a system can be modeled with TAM, but the reader is encouraged to follow the links given above (or ask me) for further information and details.

Block Diagram: System Structure

A system’s structure, or composition, is described using a simple block diagram. At its simplest form, this diagram describes the different components that make up the system.
For example, describing a simple travel agency system, with a reservation and information system can look something like this (example taken from the FMC introduction):

Sample: Travel Agency System

This in itself already tells us some of the story: there’s a travel agency system, accessed by customers and other interested parties, with two subsystems: a reservation system and an information help desk system. The information is read and written to two separate data stores holding the customer data and reservations in one store, and the travel information (e.g. flight and hotel information) in the other. This data is fed into the system by external travel-related organizations (e.g. airlines, hotel chains), and reservations are forwarded to the same external systems.

This description is usually enough to provide at least contextual, high-level information about the system. But the diagram above already tells us a bit more. It provides some information about the access points to the data, about the different kinds of data flowing in the system, and about which component interacts with which other component (who knows whom). Note that there is little to no technical information at this point.

The modeling language itself is pretty straightforward and simple as well: we have two main “entities”: actors and data stores.
Actors, designated by square rectangles, are any components that do something in the system (including humans). They are the active components of the system. Actors communicate with other actors through channels (lines with small circles on them), and they read/write from/to data stores (simple lines with arrowheads). Examples include services, functions and human operators of the system.
Data stores, designated by rounded rectangles (or circles), are passive components. These are “places” where data is stored. Examples include database systems, files, and even memory arrays (or generally any data structure).

Armed with these definitions, we can already identify some useful patterns, and how to model them:

Read only access – actor A can only read from data store S:
Read only access


Write only access – actor A can only write to data store S:
Write only access


Read/Write access:
Read/Write access


Two actors communicating on a request/response channel have their own unique symbol:
In this case, actor ‘B’ requests something from actor ‘A’ (the arrow on the ‘R’ symbol points to ‘A’), and ‘A’ answers back with data. So data actually flows in both directions. A classic example of this is a client browser asking for a web page from a web server.


A simple communication over a shared storage:
actors ‘A’ and ‘B’ both read and write from/to data store ‘S’, effectively communicating through it.


There’s a bit more to this formalism, which you can explore on the FMC/TAM website – but not really much more than what’s shown here. These simple primitives already provide a powerful expression mechanism to convey most of the ideas we need to communicate about our system on a daily basis.

Usually, when providing such a diagram, it’s good practice to accompany it with some text explaining the different components and their roles. This shouldn’t take more than 1-2 paragraphs, though the right amount depends on the level of detail and the system’s size.

This would generally help with two things: identifying redundant components, and describing the responsibility of each component clearly. Think of this text explanation as a way to validate your modeling, as displayed in the diagram.

Rule of thumb #2: If your explanation doesn’t include all the actors/stores depicted in the diagram – you probably have redundant components.

Behavior Modeling

The dynamic behavior of a system is of course no less important than its structure. The cooperation, interaction and data flow between components allow us to identify failure points, bottlenecks, decoupling problems etc. In this case, TAM adopts largely the UML practice of using sequence diagrams or activity diagrams, whose description is beyond the scope of this post.

One thing to keep in mind though, is that when modeling behavior in this case, you’re usually not modeling interaction between classes, but rather between components. So the formalism of “messages” sent between objects need not couple itself to code structure and class/method names. Remember: you generally don’t model the software (code), but rather system components. So you don’t need to model the exact method calls and object instances, as is generally the case with UML models.

One good way to validate the model at this point is to verify that the components mentioned in the activity diagram are mentioned in the system’s structure (in the block diagram); and that components that interact in the behavioral model actually have this interaction expressed in the structural model. A missing interaction (e.g. channel) in the structural view may mean that these two components have an interface that wasn’t expressed in the structural model, i.e. the structure diagram should be fixed; or it could mean that these two components shouldn’t interact, i.e. the behavioral model needs to be fixed.

This is the exact thought process that this modeling helps to achieve – modeling two different facets of the system and validating one with the other in iterations allows us to reason and validate our understanding of the system. The explicit diagrams are simply the visual method that helps us to visualize and capture those ideas efficiently. Of course, keep in mind that you validate the model at the appropriate level of abstraction – don’t validate a high level system structure with a sequence diagram describing implementation classes.

Rule of thumb #3: Every interaction modeled in the behavioral model (activity/sequence diagrams) should be reflected in the structural model (block diagram), and vice versa.
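
This cross-check between the two models can even be done mechanically. The sketch below treats both models as sets of component pairs; the component names in the usage are invented for illustration, not part of any TAM tooling:

```cpp
#include <cassert>
#include <set>
#include <string>
#include <utility>

using Link = std::pair<std::string, std::string>;

// Normalize a pair of component names so A<->B and B<->A compare equal.
Link link(const std::string& a, const std::string& b) {
    return a < b ? Link{a, b} : Link{b, a};
}

// Rule of thumb #3 as a mechanical check: every interaction found in
// the behavioral model must have a matching channel in the block diagram.
bool models_consistent(const std::set<Link>& channels,
                       const std::set<Link>& interactions) {
    for (const auto& i : interactions)
        if (channels.count(i) == 0)
            return false;  // mismatch: fix one of the two models
    return true;
}
```

A failed check doesn’t say which model is wrong – as discussed above, either the missing channel should be added to the block diagram, or the interaction removed from the behavioral model.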

Domain Modeling

Another often useful aspect of modeling a system is modeling the data processed by the system. It helps to reason about the algorithms, expected load and eventually the structure of the code. This is often the part that’s not covered by well-known patterns and needs to be carefully tuned per application. It also helps in creating a shared vocabulary and terminology when discussing different aspects of the developed software.

A useful method in the case of domain modeling is UML class diagrams, which TAM also adopts. Here as well, I often find a scaled-down version the most useful, usually focused on the main entities and their relationships (including cardinality). The notation of class diagrams can be leveraged to express these relationships quite succinctly.
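
When no diagram tool is at hand, the same scaled-down domain model can be sketched in code. The structs below capture entities and cardinalities loosely inspired by the travel agency example; all names are invented for illustration, not taken from a real schema:

```cpp
#include <string>
#include <vector>

struct TravelInfo {                    // one flight/hotel offer
    std::string destination;
};

struct Reservation {                   // each reservation refers to
    std::string id;                    // exactly one offer (1 : 1)
    TravelInfo info;
};

struct Customer {                      // one customer holds zero or
    std::string name;                  // more reservations (1 : 0..*)
    std::vector<Reservation> reservations;
};
```

Note how the relationships and cardinalities live in the types themselves (a member vs. a vector of members) – exactly the information a class diagram would show with association lines and multiplicities.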

Explicit modeling of the code itself is rarely useful in my opinion – the code will probably be refactored way faster than a model will be updated, and a reader who is able to read a detailed class diagram can also read the code it describes. One exception to this rule might be when your application deals with code constructs, in which case the code constructs themselves (e.g. interfaces) serve as the API to your system, and clients will need to write code that integrates with it, as a primary usage pattern of the system. An example for this is an extensible library of any sort (eclipse plugins are one prominent example, but there are more).

Another useful modeling facet in this context is to model the main concepts handled in the system. This is especially useful in very technical systems (oriented at developers), that introduce several new concepts, e.g. frameworks. In this case, a conceptual model can prove to be useful for establishing a shared understanding and terminology for anyone discussing the system.

Iterative Refinement

Of course, at the end of the day, we need to remember that modeling a system in fact reflects a thought process we go through when designing the system. The end product, in the form of a document (or set of documents), represents our understanding of the system – its structure and behavior. But this is never a one-way process. It is almost always an iterative process that reflects our evolving understanding of the system.

So modeling a specific facet of the system should not be seen as a one-off activity. We often follow a dynamic where we model the structure of the system, but then try to model its behavior, only to realize the structure isn’t sufficient or leads to a suboptimal flow. This back and forth is actually a good thing – it helps us to solidify our understanding and converge on a widely understood and accepted picture of how the system should look, and how it should be constructed.

Refinements also happen on the axis of abstractions. Moving from a high level to a lower level of abstraction, we can provide more details on the system. We can refine as much as we find useful, up to the level of modeling the code (which, as stated above, is rarely useful in my opinion). Also when working on the details of a given view, it’s common to find improvement points and issues in the higher level description. So iterations can happen here as well.

As an example, consider the imaginary travel agency example quoted above. One possible refinement of the structural view could be something like this (also taken from the site above):

Example: travel agency system refined

In this case, more detail is provided on the implementation of the information help subsystem and the ‘Travel Information’ data store. Although it provides some more (useful) technical detail, this is still a block diagram describing the structure of the system. This level of detail refines the high-level view shown earlier, and already provides more information and insight into how the system is built – for example, how the data stores are implemented and accessed, and the way data is adapted and propagated in the system. The astute reader will note that the ‘Reservation System’ subsystem now interacts with the ‘HTTP Server’ component in the ‘Information help desk’ subsystem. This makes sense from a logical point of view – the reservation system accesses the travel information through the same channels used to provide information to other actors – but this information was missing from the first diagram (no channel between the two components).
One important rule of thumb: as you go down the levels of abstraction, keep the names of the actors introduced at the higher level of abstraction. This allows readers to correlate the views more easily, identify the different actors, and reason about their place in the system. It provides a context for the finer-grained details. As the example above shows, the more detailed diagram still includes the actor and store names from the higher-level diagram (‘Travel Information’, ‘Information help desk’, ‘Travel Agency’).

Rule of thumb #4: Be consistent about names when moving between different levels of abstraction. Enable correlations between the different views.

Communicating w/ Humans – Visualization is Key

With all this modeling activity going on, we have to keep in mind that our main goal, besides good design, is communicating this design to other humans, not machines. This is why, reluctant as we are to admit it (engineers…) – aesthetics matter.

In the context of enterprise systems, communicating the design effectively is as important to the quality of the resulting software as designing it properly. In some cases, it might be even more important – just consider the amount of time you sometimes spend on integrating systems vs. how much time you spend writing the software itself. So a good-looking diagram is important, and we should be mindful about how we present it to the intended audience.

Following are some tips and pointers on what to look for when considering this aspect of communicating our designs. This is by no means an exhaustive list, but rather one based on experience (and some common sense). More pointers can be found in the links above, specifically in the visualization guide.

First, keep in mind that the visual arrangement of nodes and edges in your diagram directly affects how clear the diagram is to readers. Try to minimize intersections of edges, and align edges on horizontal and vertical axes.
Compare these two examples:

Aligning vertices

The arrangement on the left is definitely clearer than the one on the right. Note that generally speaking, the size of a node does not imply any specific meaning; it is just a visual convenience.

Similarly, this example:

Visual alignment

shows how the rearrangement of nodes allows for fewer intersections, without losing any meaning.

Colors can also be very useful in this case. One can use colors to help distinguish between different levels of containment:

Using colors

In this case, the usage of colors helps to distinguish an otherwise confusing structure. Keep in mind that readers might want to print the document on a black-and-white printer (and some readers are color-blind) – so use high-contrast colors where possible.

Font styles for labels are generally not very useful for conveying meaning. Try to stick to a single font and be consistent with it. An exception might be a label that pertains to a different aspect, e.g. configuration files or code locations, which might be more easily distinguished using a different font style.

Visuals have Semantics

One useful way to leverage the colors and layout of a diagram is to stress specific semantics you want to convey. You might use colors to distinguish a set of components from other components, e.g. highlighting team responsibilities or specific implementation details. Note that this kind of technique is not standard, so remember to include an explanation – a legend – of what the different colors mean. Also, too many colors might cause more clutter, eventually defeating the purpose of clarity.

Another useful technique is to use the layout of the nodes in the graph to convey an understanding. For example, the main data flow might be hinted at in the block diagram by laying out the nodes from left to right, or top to bottom. This is not required, nor does it carry any specific meaning, but it is often useful and provides hints as to how the system actually works.


As we’ve seen, “doing” architecture, while often perceived as a cumbersome and unnecessary activity, isn’t hard when done effectively. We need to keep in mind the focus of this activity: communicating our designs and reasoning about them over longer periods of time.

Easing collaboration around design is not just a matter of knowledge sharing (though that’s important as well); it is a necessity when trying to build software across global teams, over long periods of time. How effectively we communicate our designs directly impacts how we collaborate, the quality of the software we produce, how we evolve it over time, and eventually the bottom line of deliveries.

I hope this (rather long) post has shed some light on the subject, provided some insight and useful tips, and encouraged people to invest some effort into learning further.

Written by youryblog

January 17, 2015 at 6:27 PM

Posted in SW Design, SW Eng./Dev.

Over 70% of the cost (time) of developing a program goes out after it has been released

leave a comment »

Thu, 1 Jan 2015

Actually I found that usually the ones that find it the most fascinating
write the least legible code, because they never bother with software
engineering and design.

You can get a high-school whiz kid to write the fastest code there is, but
there is no way you will be able to change anything about it five minutes later.

Considering that over 70% of the cost (time) of developing a program goes
out after it has been released, when changes start to be asked for, that is
a problem.

Micha Feigin
Csail-related mailing list

Interesting view on students’ grades: Dear Student: No, I Won’t Change the Grade You Deserve

Written by youryblog

January 2, 2015 at 10:04 PM

Programming without objects 17.09.2014

leave a comment »

a copy from

Recently I gave an itemis-internal talk about basic functional programming (FP) concepts. Towards the end I made the claim that objects and their blueprints (a.k.a. classes) have severe downsides and that it is much better to “leave data alone”, which is exactly how Clojure and Haskell treat this subject.

I fully acknowledge that such a statement is disturbing to hear, especially if your professional thinking was shaped by the OOP paradigm [1] for 20 years or more. I know this well, because that was exactly my situation two years ago.

So let’s take a closer look at the OO class as provided by Java, Scala or C# and think a little bit about its features, strengths and weaknesses. Subsequently, I’ll explain how to do better without classes and objects.

The OO class

An OO class bundles data and the implementation related to this data. A good OO class hides any details about implementation and data that are not supposed to be part of the API. According to widespread belief, a good OO class reduces complexity because any client using its instances (a.k.a. the objects) will not and cannot deal with those implementation details.

So, we can see three striking aspects here: API as formed by the public members, data declaration and method implementation. But there is much more to it:

  • Instances of a class have identity by default. Two objects are the same if and only if both share the same address in memory, which gives us a relation of intensional equality on their data.
  • The reference to an object is passed as implicit first argument to any method invocation on the object, available through this or self.
  • A class is a type.
  • Inheritance between classes enables subtyping and allows subclasses to reuse method implementations and data declarations of their superclasses.
  • Methods can be “polymorphic” which only means that in order to invoke a method objects do a dispatch over the type of the implicit first argument.
  • Classes can have type variables to generalize their functionality over a range of other types.
  • Classes act as namespaces for inner classes and static members.
  • Classes can support already existing contracts of abstract types (in Java called “interfaces”) by implementing those.

That’s a lot of stuff, and I probably did not list everything that classes provide. The first thing we can state is that a class is not “simple” in the sense that it does only one thing. In fact, it does a lot of things. But so what? After all, the concept is so old and so widely known that we may accept it anyway.

Besides the class being a complex thing, there are other disadvantages that most OO programmers are somehow aware of but tend to accept like laws of nature [2].

Next I’ll discuss a few of these weaknesses in detail:


Let’s look at intensional equality as provided by object identity. Apparently this is not what we want at all times, otherwise we wouldn’t see the pattern called ValueObject [3]. A type like Java’s String is a well-known example of a successful ValueObject implementation. The price for this freedom of choice is that the == operator is almost useless and we must use equals, because only its implementation can decide what kind of equality relation is in effect. The numerous tutorials and blog posts on how to correctly implement equals and hashCode are witness that this topic is not easy to get right. In addition, we must be very careful when using a potentially mutable object as a key in a map or an item in a set. If it’s not a value, a lookup might fail miserably (and unexpectedly).


The conflation of mutable data and object identity is also a problem in another regard: concurrency. Objects not acting as values must be protected from race conditions and stale memory, either by thread confinement or by synchronization, which is hard to do right while avoiding deadlocks or livelocks.


Let’s turn to the implicit first argument; in Java and C# it’s referred to by the keyword this. It allows us to invoke methods on an object Sam by stating Sam.doThis() or Sam.doThat(), which sounds similar to natural language. So we can consider this syntactic sugar that eases understanding in cases where a “subject verb” formulation makes sense [4]. But wait, what happens if we’re not able to use a method implemented in a class? More recent languages offer extension methods (in Scala realizable using implicit) to help make our notation more consistent. Thus, in order to put the first argument in front of the method name to improve readability, we’re required to master an additional concept.


Objects support polymorphism, which was the really exotic feature back when I learned OOP. Actually, the OO “polymorphic method invocation” is a special case of a function invocation dispatch, namely a dispatch over the type of the implicit first argument. So this fancy p-thing turns out to be a somewhat arbitrary limitation of a very powerful concept.


Implementation inheritance creates an “is-a” relation between types (giving birth to terms like “superclass” and “subclass”), shares API, and hard-wires this with the reuse of data declarations and method implementations. But what can you do if you want one without the other? Well, if you don’t want to reuse implementations but only define type relations and share API, you can employ interfaces, which is simple and effective. Now the other way around: what can you do to reuse implementations without tying a class via extends to an existing class? The options I know are:

  • Use static methods that expect a type declared by an interface (which might force some data members to become publicly accessible). Here again, extension methods might jump in to reduce syntactic disturbances.
  • Use delegation, so the class in need of existing implementations points to a class containing the shared methods. Each delegating method invokes the implementation from the other class, which creates a lot of boilerplate if the number of methods is high.

Implementation inheritance is a very good example of adding much conceptual complexity without giving us the full power of the ideas it builds upon. In fact, the debate about the potential harm that it brings (and how other approaches might remedy this) is nearly as old as the idea itself.

Encapsulation of data

“Information hiding” makes objects opaque; that’s its purpose. It seems useful if objects contain private mutable data necessary to implement stateful behaviour. However, with respect to concurrency and more algebraic data transformation, this encapsulation hurts. It hurts because we don’t know what’s underneath an object’s API, and the last thing we want is mutable state that we’re not aware of. And it hurts because if we want to work with an object’s data, we have to ask for every single bit of it.

The common solution to the latter problem is to use reflection, and that’s exactly what all advanced frameworks do to uniformly implement their behaviour on top of user-defined classes that contain data.

On the other hand, we all have learned that reflection is dangerous, it circumvents the type system, which may lead to all kinds of errors showing up only at runtime. So most programmers resort to work with objects in a very tedious way, creating many times more code compared to what they would have written in a different language.

Judging the OO class

So far, we have seen that OO classes are not ideal. They mix up a lot of features while often providing only a narrow subset of what a more general concept could do for us. The resulting programs are not only hard to get right; in terms of lines of code they tend to be many times larger than programs written in modern FP languages. Since system size is a significant cost driver in maintenance, OOP seems to me economically a mistake when you have other techniques at your disposal.

When it was introduced more than 20 years ago, the OO class brought encapsulation and polymorphism into the game, which was beneficial. Thus, OOP was clearly better than procedural programming. But a lot of time has passed since then, and we have learned much about all the little and not-so-little flaws. It’s time to move on…

A better way

Now that I have discredited the OO class as offered by Java, C# and other mainstream OOP languages, I feel the duty to show you an alternative as it is available in Clojure and Haskell. It’s based on the following fundamentals:

Data is immutable, and any complex data structure (regardless of whether it is more like a map or a list) behaves like a value: you can’t change it. But you can get new values that share a lot of data with what previously existed. The key to making this approach feasible is so-called “persistent data structures”. They combine immutability with efficiency.
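
A minimal Haskell sketch of this value semantics (the `Point` type here is purely illustrative, not part of the article’s later example):

```haskell
-- Value semantics: "updating" never mutates; it yields a new value,
-- and the original is untouched.
data Point = Point { x :: Int, y :: Int } deriving (Show, Eq)

p1 :: Point
p1 = Point 1 2

p2 :: Point
p2 = p1 { y = 20 }  -- record-update syntax builds a fresh value

main :: IO ()
main = do
  print p1  -- Point {x = 1, y = 2}   (unchanged)
  print p2  -- Point {x = 1, y = 20}
```

In Clojure, `(assoc p1 :y 20)` plays the same role, returning a new map-like value that shares structure with the old one.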

Data is commonly structured using very few types of collections: in Clojure these are vectors, maps and sets; in Haskell you have lists, maps, sets and tuples. To combine a type with map-like behaviour, Clojure gives us the record. Haskell offers record syntax, which is preferable to tuples for modeling more complex domain data. The resulting “instances” of records are immutable and their fields are public.

Functions are not part of records, although the Haskell compiler as well as the Clojure defrecord macro derive some related default implementations for formatting, parsing, ordering and comparing record values. In both languages we can nicely access pieces of these values and cheaply derive new values from existing ones. No one needs to implement constructors, getters, setters, toString, equals or hashCode.

Defining data structures

Here’s an example of some records defined in Haskell:

module Accounting where
data Address = Address { street :: String
                       , city :: String
                       , zipcode :: String} 
             deriving (Show, Read, Eq, Ord)

data Person = Person { name :: String
                     , address :: Address}
            deriving (Show, Read, Eq, Ord)

data Account = Account { owner :: Person
                       , entries :: [Int]}
             deriving (Show, Read, Eq, Ord)

And here’s what it looks like in (untyped) Clojure [5]:

(ns accounting)
(defrecord Address [street zipcode city])

(defrecord Person [name address])

(defrecord Account [owner entries])      

Let’s see what we now have got:

  • Thread-safe data.
  • Extensional equality.
  • Types.
  • No subtyping.
  • Reasonable default implementations of basic functionality.
  • Complex data based on very few composable collection types.

The last item can hardly be overrated: since we use only a few common collection types, we can immediately apply a vast set of data transformation functions like map, filter, reduce/foldl and friends. It takes some practice, but it’ll take you to a whole new level of expressing data-transformation logic.
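
As a sketch of that point, here is one function over the article’s `Account` shape (re-declared with a simplified `owner` field so the snippet stands alone), using nothing beyond the stock list combinators:

```haskell
-- A plain record plus the generic list vocabulary: no bespoke
-- iteration code, just concatMap, filter and foldl.
data Account = Account { owner :: String, entries :: [Int] }

-- Sum of all positive entries across many accounts.
positiveTotal :: [Account] -> Int
positiveTotal accts = foldl (+) 0 (filter (> 0) (concatMap entries accts))

main :: IO ()
main = print (positiveTotal [Account "a" [10, -3, 5], Account "b" [-1, 7]])  -- 22
```

The same idea reads almost identically in Clojure with `mapcat`, `filter` and `reduce`.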

But where should we implement individual domain logic?

Adding domain functionality

If we only want to implement functions that act on some type of data, we can pass a value as (for example) the first argument. To denote the relation between the record and the functions, we implement them in the same Clojure namespace or Haskell module that the record definition lives in.

Let’s add a Haskell function that calculates the balance for an account (please note that the Account entries are a list of integers):

balance :: Account -> Int
balance (Account _ es) = foldl (+) 0 es

Now the Clojure equivalent:

(defn balance
  [{es :entries}]
  (reduce + es))

If we now want to call these functions in an OO like “subject verb” manner we would take Clojure’s thread-first macro ->. To get the balance of an account a we can use the expression (-> a balance). Or, given that p is a Person, the expression (-> p :address :city) allows us to retrieve the value of the city field.

In Haskell we need an additional function definition to introduce an operator that allows us to flip the order:

(-:) :: a -> (a -> b) -> b
x -: f = f x

Now we can calculate the balance for an account a using a-:balance, or access the city field with an expression p-:address-:city.

Please note that the mechanisms allowing us to flip the function-argument order are totally unrelated to records or any other kind of data structure.

Ok, that was easy. For implementing domain logic we now have

  • Ordinary functions taking the record as parameter.
  • A loose “belongs-to” relation between data and functions via namespace/module organization.
  • Independent “syntactic sugar” regarding how we can denote function application.
  • No hassle when we want to extend functionality related to the data without being able to alter the original namespace/module.

From an OO perspective this looks like static functions on data, something described by the term Anemic Domain Model. I can almost hear the scream starting inside any long-term OO practitioner’s head: “But this is wrong! You must combine data and functions, you must encapsulate this in an entity class!”

After doing this for more than 20 years, I just ask: “Why? What’s the benefit? And how is this idea actually applied in thousands of systems? Does it serve us well?” My answer today is a clear “No, it doesn’t.” Actually, keeping these things separate is simpler and makes the pieces more composable. Admittedly, to make it significantly better than procedural programming we have to add a few language features you have been missing in OO for so long. To see this, read on; now the fun begins:


Both languages allow bundling function signatures to connect an abstraction with an API. Clojure calls these “protocols”, Haskell calls them “type classes”; both share noticeable similarities with Java interfaces. But in contrast to Java, we can declare that types participate in these abstractions regardless of whether the protocol/type class existed before the record/type declaration or not.

To illustrate this with our neat example, we introduce an abstraction that promises to give us somehow the total sum of some numeric data. Let’s start with the Clojure protocol:

(defprotocol Sum
  (totals [x]))

The corresponding type class in Haskell looks like that:

class Sum a where
  totals :: a -> Int

This lets us introduce polymorphic functions for types without touching the types, essentially solving the expression problem, because it lets us apply existing functions that rely only on protocols/type classes to new types.

In Clojure we can extend the protocol to an existing type like so:

(extend-protocol Sum
  Account
  (totals [acc] (balance acc)))

And in Haskell we make an instance:

instance Sum Account where
  totals = balance

It should be easy to spot the similarity. In both cases we implement the missing functionality based on the concrete, existing type. We’re now able to apply totals to any Account instance, because Account now supports the contract. What would you do for a similar effect in OO land? [6]

With this, the story of function implementation reuse becomes very different: protocols/type classes are a means to hide differences between concrete types. This allows us to create a large number of functions that rely solely on abstractions, not concrete types. Instead of each object carrying the methods of its class (and superclasses) around, we have independent functions over small abstractions. Each type of record can decide where it participates in, no matter what existed first. This drastically reduces the overall number of function implementations and thus the size of the resulting system.
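
A small Haskell sketch of that claim (the `Invoice` type and the `overdrawn` function are hypothetical additions, not from the article): one function written once against the `Sum` abstraction works for every type that joins the type class, whenever it joins.

```haskell
-- The abstraction: anything that can report a total.
class Sum a where
  totals :: a -> Int

newtype Account = Account { entries :: [Int] }
newtype Invoice = Invoice { lines_ :: [Int] }  -- hypothetical second type

instance Sum Account where totals = sum . entries
instance Sum Invoice where totals = sum . lines_

-- Written once, relying only on the abstraction, never on a concrete type:
overdrawn :: Sum a => a -> Bool
overdrawn v = totals v < 0

main :: IO ()
main = do
  print (overdrawn (Account [10, -3]))  -- False
  print (overdrawn (Invoice [-5, 2]))   -- True
```

Any future record can opt in with one more `instance` declaration; `overdrawn` itself never changes.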

Perhaps you now have a first idea of the power of the approach to polymorphism in Clojure and Haskell, but wait, polymorphism is still only a special case of dispatching function invocations.

Other ways for dispatching

Let’s first look from a conceptual perspective at what dispatching is all about: you apply a function to some arguments, and some mechanism decides which implementation is actually used. So the easiest way to build a runtime dispatcher in an imperative language is a switch statement. What’s the problem with this approach? It’s tedious to write, and it conflates branching logic with the actual implementation of the case-specific logic.

To come up with a better solution, Haskell and Clojure take very different approaches, but both go beyond what any OO programmer is commonly used to.

Haskell applies “Pattern Matching” to argument values, which essentially combines deconstruction of data structures, binding of symbols to values, and — here comes the dispatch — branching according to matched patterns in the data structure. It’s already powerful and it can be extended by using “guards” which represent additional conditions. I won’t go into the details here, but if you’re interested I recommend this chapter from “Learn You a Haskell for Great Good”.
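
A brief sketch of that combination (the `classify` function is illustrative, not from the article): patterns deconstruct and bind, guards refine, and the first matching clause wins.

```haskell
-- Dispatch via pattern matching plus guards:
-- deconstruction, binding and branching in one place.
classify :: (Int, Int) -> String
classify (0, 0) = "origin"               -- literal pattern
classify (x, 0)
  | x > 0     = "positive x-axis"        -- guard refines the match
  | otherwise = "negative x-axis"
classify _ = "elsewhere"                 -- catch-all clause

main :: IO ()
main = mapM_ (putStrLn . classify) [(0, 0), (3, 0), (-2, 0), (1, 1)]
```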

Clojure “Multimethods” are completely different. Essentially they consist of two parts: a dispatch function, and a bunch of functions to be chosen from according to the dispatch value that the dispatch function returns for an invocation. The dispatch function can contain any calculation, and in addition it is possible to define hierarchies of dispatch values, similar to what can be done with type-based multiple inheritance in OO languages like C++. Again very powerful, and if you’re interested in the details, here’s an online excerpt of “Clojure in Action”.

By now, I hope you are able to see that Clojure and Haskell open the door to a different and more powerful way to design systems. The interesting features of the OO class, like dispatching and ease of implementation reuse through inheritance, fade in the light of the more general concepts of today’s FP languages.

To complete the picture here’s my discussion of some left-over minor features that OO classes provide, namely type parameters, encapsulation and intensional equality.

Type parameters

In the case of Haskell, type parameters are available for types (used in so-called “type constructors”) and type classes. If you know Java generics, then the term “type variable” is a good match for “type parameter”. These can be combined with constraints to restrict the set of admissible types. So with Haskell you don’t lose anything.

Clojure is basically untyped, so the whole concept doesn’t apply. This gives more freedom, but also more responsibility. However, Typed Clojure adds compile-time static type checking where type annotations are provided. For functions, Typed Clojure offers type parameters. But, AFAIK, at the time of this writing, Typed Clojure doesn’t offer type parameters for records.


Encapsulation

The OO practitioner will have noticed that the very important OO idea of member visibility seems to be completely missing in Clojure and Haskell. This is true regarding data.

Restricting visibility in OO can have two reasons:

  • Foreign code must not rely on a member’s value because it’s not part of the API and might change without warning.
  • Foreign code must not be allowed to change a member’s value when this could bring the object into an inconsistent state.

As available in today’s mainstream OO languages, I don’t see that the first reason is sufficiently covered by technical visibility, because the notion of “foreign” cannot be adequately encoded. In fact, an idea like PublishedInterface only makes sense because visibility does not express our intent. Instead of a technical restriction, plain documentation seems to be a better way to express how reliable a piece of a data structure is.

Regarding the second reason, immutable data and referential transparency certainly help. A system where most functions don’t rely on mutable state is less prone to failure caused by erroneous modification of some state. Of course, it is still possible to bring data into an inconsistent state and pass it to functions that signal an exception. But the same is possible for mutable objects; the only difference is that a setter signals the exception a bit earlier. Eventually, validation must detect the problem at the earliest possible point in time to give feedback to the user or an external system. This is true regardless of the paradigm.

Regarding the visibility of functions, Haskell as well as Clojure gives us a means to make a subset of them available in other modules/namespaces and to protect all others from public usage. Due to its type system, Haskell can additionally protect data structures from being used in foreign modules.

Intensional Equality

In a language that consistently uses value semantics there is only extensional equality, and the test for equality works everywhere with = or ==. If we need identity in our data, we can resort to the same mechanism we use for records in a relational database: either define an id function over some of the fields, or add a surrogate key. Anyway, in a system with minimal mutable state, the appetite for object identity diminishes rapidly.
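
A small Haskell sketch of that idea (the `Person` type and both helper functions are illustrative): value equality everywhere, with identity recovered from a surrogate key only where it is actually needed.

```haskell
-- Value equality by default; identity, when needed, comes from
-- a surrogate key, exactly as in a relational database.
data Person = Person { pid :: Int, name :: String }
  deriving (Show, Eq)

before, after :: Person
before = Person 42 "Ada"
after  = Person 42 "Ada Lovelace"  -- an "updated" value, same entity

sameRecord :: Person -> Person -> Bool
sameRecord a b = a == b            -- extensional: all fields equal

sameEntity :: Person -> Person -> Bool
sameEntity a b = pid a == pid b    -- identity via the surrogate key

main :: IO ()
main = do
  print (sameRecord before after)  -- False
  print (sameEntity before after)  -- True
```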

Are there any downsides?

Of course, apart from being different and requiring us to learn some stuff, programming without objects — especially without mutable state — sometimes calls for ideas and concepts that we would never see in OOP [7]. For example, there is the concept of a zipper for manipulating tree-like data, which is dispensable in OO because we can just hold onto a node in a tree and modify it efficiently in place. And you can’t create cyclic dependencies in data structures. In my experience you seldom encounter these restrictions, and when you do, you either detect that your idea was crap anyway, or find that an unexpected but elegant solution is available.

A system without side effects is a useless thing, so real-life functional languages offer ways to enable them. By being either forced by the compiler to wrap side effects (as in Haskell) or punished by obtrusive API calls (as in Clojure), we put side effects in specific places and handle them with great care. This affects the structure of the system. But it is a good thing, because having referential transparency in large parts of a code base makes testing, parallelization, or simply understanding what code does a breeze.
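
A tiny Haskell sketch of that separation (the names are illustrative): the side-effecting shell lives in `IO`, while the domain logic stays pure and is trivially testable in isolation.

```haskell
-- The pure core: no IO in the type, so the compiler guarantees
-- this function is referentially transparent.
deposit :: Int -> [Int] -> [Int]
deposit amount es = amount : es

main :: IO ()
main = do
  -- Side effects are confined to this thin IO "shell" around the core.
  let es = deposit 50 [10, -3]
  print (sum es)  -- 57
```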

You may argue that imperative programming leads to more efficient programs than functional programming. You’re right. Manual memory management is also likely to be more efficient than relying on a garbage collector – if done correctly. Is there anyone on the JVM missing manual memory management? No? So there is certainly a price. But modern multi-core hardware, as available in enterprise computing, is not as expensive as precious programmer time.


This was really a long journey. I started by breaking down and analyzing the features that the good old OO class offers. I continued by showing how you can do the same and more using a modern FP language. I know, it looks strange, and I promise it feels strange… but only in the beginning. In the end you’re much better off learning how to use the power of a modern language like Clojure or Haskell.

1. Here I refer to OO concepts as available today in the mainstream. Initially, objects were conceived to be more like autonomous cells that exchange messages, very similar to what the Actor model provides. So in a sense, what we think of today as OOP is not what its inventors thought it should be.
2. Which is of course only true if OOP languages are all you know. Read about the Blub Paradox to get an idea why it is so hard to pick up totally different approaches to programming.
3. Regarding equality of data, the term “ValueObject” is IMO an outright misnomer. A value supports extensional equality, whereas an object has identity and therefore supports intensional equality. The term “Value” alone would have been better.
4. This “subject verb” order isn’t appropriate in every situation, as Steve Yegge enjoyably points out in his famous blog post Execution in the Kingdom of Nouns.
5. If you are concerned about missing type annotations in Clojure records: there exists an optional type system called Typed Clojure that allows adding compile-time static type checking. Alternatively, you can use libraries like Domaintypes or Prismatic Schema to add type-like notation and get corresponding runtime validation.
6. What can we do in OO to add methods contained in a new interface to an existing type without touching the type’s implementation? Use the Adapter pattern, if you’re lucky enough to influence how instances are created, and if you’re able to retain a reasonable equality relation between wrapped and unwrapped instances. Good luck!
7. No, I don’t refer to monads. Java 8 introduced Optional, which is only a flavor of the Haskell Maybe monad. So there is evidence that the idea is useful in OOP.

Written by youryblog

September 30, 2014 at 9:14 PM

OC’s Computer Information Systems degree answers employer and graduate needs

leave a comment »

 Chris Kluka
Choosing the Bachelor of Computer Information Systems degree program at Okanagan College was a no-brainer for Chris Kluka – and it has been a decision that paid off in spades with career opportunities.

Kluka had taken post-secondary studies at other Canadian institutions, but the credential and education he received didn’t fully meet his needs or expectations.

“I’m interested in infrastructure and systems management,” says Kluka, who is now an IT Systems Infrastructure Architect at Daemon Defense Systems Inc. in Winnipeg.

“I looked at programs across the country and chose Okanagan College. The other program I took and others I looked at had the wrong focus. They were focused on Programming or Computer Science. I wanted a program focused on IT systems implementation and management,” he says.

With the benefit of the College giving him transfer credits for much of his post-secondary education taken elsewhere, Kluka entered the Computer Information Systems (CIS) diploma program at Okanagan College. The CIS diploma is a two-year credential that ladders into the College’s four-year Bachelor of CIS degree. At the College, he was also able to integrate some courses from the Network and Telecommunications Engineering Technology program as electives.

Between diploma and degree, Kluka found work with a Kelowna-based company, FormaShape, where he started as a junior network administrator. Eight months later he was IT Manager. Then he came back for his degree.

After graduation, it was a return to Manitoba, where career opportunities have been unfolding. For the past two years, he has been with Daemon Defense Systems Inc. and the contracts the company has secured have afforded him considerable experience in a variety of environments.

“I’ve been leading architecture design and deployment in projects such as the Canadian Museum of Human Rights, network redevelopment in the Winnipeg Convention Centre and the Investors Group Field, home of the Winnipeg Blue Bombers. Those three projects alone represent 6,300 network drops and $20 million worth of servers and storage architecture. I have designed and implemented the IT systems architecture for three of the largest projects in the province in the last two years.”

The College’s degree program has a solid reputation among employers, explains Department Chair Rick Gee. Demand for graduates may also partially explain the high ratings given the program by students in independent surveys conducted by the Provincial government. A review of five years of graduate data shows a 94 per cent employment rate, average annual earnings of $56,000 and 91 per cent of surveyed students reporting they were satisfied or very satisfied with their education.

“There will be continued demand for diploma and degree graduates from our programs,” says Gee. “Our lives are becoming increasingly dependent on information systems, and that bodes well for the people who can understand and manage them.”

For more information on the degree or diploma programs in Computer Information Systems, visit

Written by youryblog

September 6, 2014 at 2:26 AM

Re-Post: The End of Agile: Death by Over-Simplification

with one comment

The End of Agile: Death by Over-Simplification

Copy for my students from:

(I’m afraid to lose it)

There is something basically wrong with the current adoption of Agile methods. The term Agile has been abused, becoming the biggest ever hype in the history of software development, and generating a multi-million-dollar industry of self-proclaimed Agile consultants and experts selling dubious certifications. People have forgotten the original Agile values and principles, and instead follow dogmatic processes with rigid rules and rituals.

But the biggest sin of Agile consultants was to over-simplify the software development process and underestimate the real complexity of building software systems. Developers were convinced that software design would naturally emerge from the implementation of simple user stories, that it would always be possible to pay back the technical debt in the future, that constant refactoring is an effective way to produce high-quality code, and that agility can be assured by strictly following an Agile process. We will discuss each of these myths below.

Myth 1: Good design will emerge from the implementation of user stories

Agile development teams follow an incremental development approach, in which a small set of user stories is implemented in each iteration. The basic assumption is that a coherent system design will naturally emerge from these independent stories, requiring at most some refactoring to sort out the commonalities.

However, in practice the code does not have this tendency to self-organize. The laws governing the evolution of software systems are those of increasing entropy: when we add new functionality, the system tends to become more complex. Thus, instead of hoping for the design to emerge, software evolution should be planned through a high-level architecture that includes extension mechanisms.

Myth 2: It will always be possible to pay the technical debt in the future

The metaphor of technical debt became a popular euphemism for bad code. The idea of incurring some debt appears much more reasonable than deliberately producing low-quality implementations. Developers are ready to accumulate technical debt because they believe they will be able to pay this debt in the future.

However, in practice it is not so easy to pay back the technical debt. Bad code normally comes together with poor interfaces and inappropriate separation of concerns. The consequence is that other modules are built on top of the original technical debt, creating dependencies on simplistic design decisions that were meant to be temporary. When eventually someone decides to pay the debt, it is already too late: the fix has become too expensive.

Myth 3: Constant refactoring is an effective way to produce high-quality code

Refactoring has become a very popular activity in software development; after all, it is always focused on improving the code. Techniques such as Test-Driven Development (TDD) allow refactoring to be performed at low risk, since the unit tests automatically indicate whether some working logic has been broken by code changes.

However, in practice refactoring consumes an exaggerated share of the effort invested in software development. Some developers simply do not plan for change, believing that it will always be easy to refactor the system. The consequence is that some teams implement new features very fast in the first iterations, but at some point their work halts and they start spending most of their effort on endless refactoring.

Myth 4: Agility can be assured by following an Agile process

One of the main goals of Agility is to be able to cope with change. We know that nowadays we must adapt to a reality in which system requirements may be modified unexpectedly, and Agile consultants claim that we may achieve change-resilience by adhering strictly to their well-defined processes.

However, in practice the process alone cannot provide change-resilience. A software development team will only be able to address changing system requirements if the system was designed to be flexible and adaptable. If the original design did not take maintainability and extensibility into consideration, the developers will not succeed in incorporating changes, no matter how Agile the development process is.

Agile is Dead, Now What?

If we take a look at the hype chart below, it is clear that Agile is already past the “peak of inflated expectations” and getting closer to the “trough of disillusionment”.


Several recent articles have proclaimed the end of the Agile hype. Dave Thomas wrote that “Agile is Dead”, and was immediately followed by an “Angry Developer Version”. Tim Ottinger wrote “I Want Agile Back”, but Bob Marshall replied that “I Don’t Want Agile Back”. Finally, what was inevitable just happened: “The Anti-Agile Manifesto”.

Now the question is: what will guide Agile through the “slope of enlightenment”?

In my personal opinion, we will have to go back to the basics: to all the wonderful design fundamentals that were being discussed in the 90’s: the SOLID principles of OOD, design patterns, software reuse, and component-based software development. Only when we are able to incorporate these basic principles into our development process will we reach a true state of Agility, embracing change effectively.

Another question: what will be the next step in the evolution of software design?

In my opinion: Antifragility. But this is the subject for a future post.

What about you? Did you also experience the limitations of current Agile practices? Please share with us in the comments below.

Written by youryblog

August 29, 2014 at 3:07 PM

Multiple SVN repositories discussion

leave a comment »


Personally, I would definitely prefer a separate repository per project. There are several reasons:

  1. Revision numbers. Each project repository will have its own revision sequence.
  2. Granularity. With a repository per project, you simply cannot make a single commit that spans different projects under one revision number. I consider this an advantage, though some would call it a flaw.
  3. Repository size. How large is your project? Does it have binaries under source control? I bet it does. Size matters: each revision of a binary file increases the size of the repository, and eventually it becomes clumsy and hard to maintain. A fine-grained policy for storing binary files is needed, along with additional administration. As for me, I still have not found a way to completely delete a binary file (committed by some careless user) and its history from a repository. With a repository per project this would be easier.
  4. Inner repository organization. I prefer fine-grained, well-organized, self-contained, structured repositories. There is a diagram illustrating the general (ideal) approach to the repository maintenance process. I think you would agree that it is just NOT POSSIBLE with the ‘all projects in one repo’ approach. For example, my initial structure for a repository (one that every project repository should have) is:
  5. Repo administration. ‘Repository per project’ offers more flexibility in configuring user access. It is more complex, though. But there is also a helpful feature: you can configure multiple repositories to use the same config file.
  6. Repo support. I prefer backing up repositories separately. Some say that with separate repositories it is not possible to merge history from one repository into another. Why on earth would you need that? A case where such a merge is required shows that the initial approach to source control was wrong. Project evolution implies gradually separating a project into submodules, not the other way around. And I know there is an approach for doing that.
  7. svn:externals. The ‘repository per project’ approach encourages the use of svn:externals. It is a healthy situation when dependencies between a project and its submodules are established via soft links, which is exactly what svn:externals provides.

Conclusion. If you want to keep source control simple, use one repository. If you want to do software configuration management RIGHT:
    1. Use the ‘repository per project’ approach.
    2. Introduce the separate role of software configuration manager and assign a team member to it.
    3. Hire a good administrator who can cope with all the Subversion tricks 🙂
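As a minimal sketch of the points above, here is how a per-project layout and an svn:externals soft link could look on the command line (all paths, repository names, and the `file://` URLs are purely illustrative, and a local `svn`/`svnadmin` installation is assumed):

```shell
# Create two separate per-project repositories (illustrative paths).
svnadmin create /tmp/repo-core
svnadmin create /tmp/repo-app

# Give each repository its own trunk/branches/tags structure,
# so every project repository is self-contained.
for r in repo-core repo-app; do
  svn mkdir -q "file:///tmp/$r/trunk" "file:///tmp/$r/branches" \
             "file:///tmp/$r/tags" -m "Initial layout"
done

# Link repo-core's trunk into repo-app as a soft dependency via svn:externals.
svn checkout -q "file:///tmp/repo-app/trunk" /tmp/app-wc
svn propset svn:externals "libs/core file:///tmp/repo-core/trunk" /tmp/app-wc
svn commit -q -m "Add svn:externals link to repo-core" /tmp/app-wc
svn update -q /tmp/app-wc          # pulls libs/core from the other repository
ls -d /tmp/app-wc/libs/core        # the external working copy now exists here
```

Note that each repository keeps its own revision sequence, and the dependency is expressed as a link rather than by copying code between repositories.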

PS. By the way, although I work with SVN all the time and I like it, the ‘repository per project’ approach is why I find DVCS systems more attractive from the repository organization point of view. In a DVCS, the repository is the project by default, so the ‘single vs. multiple’ question does not even arise; it would be just nonsense.
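The PS can be illustrated in one line: in a DVCS (taking Git as an example; the path is hypothetical), creating a project and creating its repository are the same act.

```shell
# In Git, the project directory itself holds the full repository.
git init -q /tmp/projectB
ls -d /tmp/projectB/.git   # repository metadata lives inside the project
```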


Written by youryblog

January 31, 2014 at 10:35 PM

Posted in IT, SW Eng./Dev.