Youry's Blog

Archive for the ‘SW Design’ Category

Effective System Modeling

Effective System Modeling (from http://schejter.me/effective-system-modeling/)

This post is a rough “transcript” (with some changes and creative freedom) of a session I gave in the Citi Innovation Lab, TLV about how to effectively model a system.

A Communication Breakdown?

Building complex software systems is not an easy task, for a lot of reasons. All kinds of solutions have been invented to tackle the different issues. We have higher level programming languages, DB tools, agile project management methodologies and quite a bit more. One could argue that these problems still exist, and no complete solution has been found so far. That may be true, but in this post, I’d like to discuss a different problem in this context: communicating our designs.

One problem that seems to be overlooked, or at least not addressed well enough, is communicating our designs and system architecture. Individually, experienced engineers are usually quite capable of coming up with elegant solutions to complex problems. But the realities and dynamics of a software development organization, especially a geographically distributed one, often require us to communicate and reason about systems developed by others.

We – software engineers – tend to focus on solving the technical issues or designing the systems we’re building. This often leads to forgetting that software development, especially in the enterprise, is often, if not always, a team effort. Communicating our designs is therefore critical to our success, but is often viewed as a negligible activity at best, if not a complete waste of time.

The agile development movement, in all its variants, has done some good to bring the issues of cooperation and communication into the limelight. Still, I often find that communication of technical details – structure and behavior of systems, is poorly done.

Why is that?

“Doing” Architecture

A common interpretation of agile development methods I often encounter tends to throw the baby out with the bathwater. I hear about people and teams refusing to do “big up-front design”. That in itself is actually a good thing, in my opinion. The problem starts when this translates into no design at all, which in turn means not wanting to spend time on documenting the architecture properly, or on how it’s communicated.

But as anyone who’s been in this industry for more than a day knows – there’s no replacement for thinking about your design and your system, and agile doesn’t mean we shouldn’t design our systems. So I claim that the problem isn’t really with designing per se, but rather with the motivation and methodology we use for “doing” our architecture – how we go about designing the system and conveying our thoughts. Most of us acknowledge the importance of thinking about a system, but we do not invest the time in preserving that knowledge and discussion. Communicating a design or system architecture, especially in written form, is often viewed as superfluous, given the working code and its accompanying tests. From my experience this is often the case because the actual communication and documentation of a design are done ineffectively.

This view was reinforced after hearing Simon Brown talk about a similar subject, one which resonated with me. An architecture document/artifact should contain “just enough” up-front design to understand the system and create a shared vision. An architecture document should augment the code, not repeat it; it should describe what the code doesn’t already describe. In other words – don’t document the code, but rather look for the added value. A good architecture/design document adds value to the project team by articulating the vision on which all team members need to align. Of course, this is less apparent in small teams than in large ones, especially teams that need to cooperate on a larger project.

As a side note, I would like to suggest that besides creating a shared understanding and vision, an architecture document also helps in preserving knowledge and ramping up new people on the team. I believe that anyone who has tried learning a new system just by looking at its code will empathize with this.

Since I believe the motivation to actually design the system and solve the problem is definitely there, I’m left with the feeling that people often view the task of documenting and communicating it as unnecessary “bureaucracy”.
We therefore need a way to communicate and document our system’s architecture effectively. A way that will allow us to transfer knowledge over time and space (geographies), but still do it efficiently – for both the writer and the readers.
It needs to be a way that captures the essence of the system without drowning the reader in details or burdening the writer with work that will prove to be a waste of time. Looking at it from a system analysis point of view, reading the document is quite possibly the more prominent use case, compared to writing it; i.e. the document is going to be read a lot more often than it is written or modified.

When we come to the question of modeling a system, with the purpose of the end result being readable by humans, we need to balance the amount of formalism we apply to the model. A rigorous modeling technique will probably result in a more accurate model, but not necessarily an easily understandable one. Rigorous documents tend to be complete and accurate, but exhausting to read and follow, thereby defeating the purpose we’re trying to achieve. At the other end of the scale are free-text documents, often in English and sometimes with some scribbled diagrams, which explain the structure or behavior of a system, often inconsistently. These are hard to follow for different reasons: inaccurate language, inconsistent terminology and/or an ad-hoc (= unfamiliar) modeling technique.

Providing an easy to follow system description, and doing so efficiently, requires us to balance these two ends. We need to have a “just enough” formalism that provides a common language. It needs to be intuitive to write and read, with enough freedom to provide any details needed to get a complete picture, but without burdening the writers and readers with unnecessary details.
In this post, I try to give an overview of, and pointers to, a method I found useful in the past (not my invention), and that I believe answers the criteria mentioned above. It is definitely not the only way and may not suit everyone’s taste (e.g. Simon Brown suggests something similar but slightly different); but regardless of the method used, creating a shared vision and putting it in writing is useful, when done effectively.

System != Software

Before going into the technicalities of describing a system effectively, I believe we need to make the distinction between a system and its software.

For the purposes of our discussion, we’ll define software as a computer-understandable description of a dynamic system; i.e. one way to code the structure and behavior of a system in a way that’s understandable by computers.
A (dynamic) system on the other hand is what emerges from the execution of software.

To understand the distinction, an analogy might help: consider the task of understanding the issue of global warming (the system) vs. understanding the structure of a book about global warming (the software).

  • Understanding the book structure does not imply understanding global warming. Similarly, understanding the software structure doesn’t imply understanding the system.
  • The book can be written in different languages, but it’s still describing global warming. Similarly, software can be implemented using different languages and tools/technologies, but it doesn’t (shouldn’t) change the emergent behavior of the system.
  • Reading the content of the book implies understanding global warming. Similarly, the system is what emerges from execution of the software.

One point we need to keep in mind, and where this analogy breaks, is that understanding a book’s structure is considerably easier than understanding the software written for a given system.
So usually, when confronted with the need to document our system, we tend to focus on documenting the software, not the system. This leads to ineffective documentation/modeling (we’re documenting the wrong thing), eventually leading to frustration and missing knowledge.
This is further compounded by the fact that existing tools and frameworks for documenting software tend to be complex and detailed, emphasizing code generation rather than human communication; this is especially true for UML.

Modeling a System

When we model an existing system, or design a new one, we find several methods and tools that help us. A lot of these methods define all sorts of views of the system – describing different facets of its implementation. Most practitioners have surely met one or more different “types” of system views: logical, conceptual, deployment, implementation, high level, behavior, etc. These all provide some kind of information as to how the system is built, but there’s not a lot of clarity on the differences or roles of each such view. These are essentially different abstractions or facets of the given system being modeled. While any such abstraction can be justified in itself, it is the combination of these that produces an often unreadable end result.

So, as with any other type of technical document you write, the first rule of thumb is:

Rule of thumb #1: Tailor the content to the reader(s), and be explicit about it.

In other words – set expectations. Set the expectation early on – what you’re describing and what is the expected knowledge (and usually technical competency) of the reader.

Generally, in my experience, three main facets are the most important: the structure of the system – how it’s built; the behavior of the system – how the different components interact on given inputs/events; and the domain model used in the system. Each of these facets can be described in more or less detail, at different abstraction levels, and using different techniques, depending on the case. But these are usually the most important facets for a reader who wants to understand the system and approach the design, or read the code.

Technical Architecture Modeling

One method I often find useful is Technical Architecture Modeling (TAM), itself a derivative of Fundamental Modeling Concepts (FMC). It is a formal method, but one which focuses on human comprehension. As such, it borrows from UML and FMC to provide a level of formalism which strikes a good balance between readability and modeling efficiency. TAM uses a few diagram types, of which the most useful are the component/block diagram, used to depict a system’s structure or composition; the activity and sequence diagrams, used to model a system’s or component’s behavior; and the class diagram, used to model a domain (value) model. Other diagram types are also included, e.g. state charts and deployment diagrams, but these are less useful in my experience. In addition, TAM has some tool support in the form of Visio stencils that make it easier to integrate into other documentation methods.

I briefly discuss how the most important facets of a system can be modeled with TAM, but the reader is encouraged to follow the links given above (or ask me) for further information and details.

Block Diagram: System Structure

A system’s structure, or composition, is described using a simple block diagram. At its simplest form, this diagram describes the different components that make up the system.
For example, a description of a simple travel agency system, with a reservation subsystem and an information subsystem, can look something like this (example taken from the FMC introduction):

Sample: Travel Agency System

This in itself already tells us some of the story: there’s a travel agency system, accessed by customers and other interested parties, with two subsystems: a reservation system and an information help desk system. The information is read and written to two separate data stores holding the customer data and reservations in one store, and the travel information (e.g. flight and hotel information) in the other. This data is fed into the system by external travel-related organizations (e.g. airlines, hotel chains), and reservations are forwarded to the same external systems.

This description is usually enough to provide at least contextual, high-level information about the system. But the diagram above already tells us a bit more. It provides some information about the access points to the data, about the different kinds of data flowing in the system, and about which component interacts with which other component (who knows whom). Note that there is little to no technical information at this point.

The modeling language itself is pretty straightforward and simple as well: we have two main “entities”: actors and data stores.
Actors, designated by square rectangles, are any components that do something in the system (including humans). They are the active components of the system. Actors communicate with other actors through channels (lines with small circles on them), and they read/write from/to data stores (simple lines with arrowheads). Examples include services, functions and human operators of the system.
Data stores, designated by round rectangles (/circles), are passive components. These are “places” where data is stored. Examples include database systems, files, and even memory arrays (or generally any data structure).

Armed with these definitions, we can already identify some useful patterns, and how to model them:

Read only access – actor A can only read from data store S:
Read only access

 

Write only access – actor A can only write to data store S:
Write only access

 

Read/Write access:
Read/Write access

 

Two actors communicating on a request/response channel have their own unique symbol:
Request/response channel
In this case, actor ‘B’ requests something from actor ‘A’ (the arrow on the ‘R’ symbol points to ‘A’), and ‘A’ answers back with data. So data actually flows in both directions. A classic example of this is a client browser asking a web server for a web page.

 

A simple communication over a shared storage:
Communication over shared storage
Actors ‘A’ and ‘B’ both read from and write to data store ‘S’, effectively communicating over it.

 

There’s a bit more to this formalism, which you can explore on the FMC/TAM website – but not much more than what’s shown here. These simple primitives already provide a powerful expression mechanism, enough to convey most of the ideas we need to communicate about our system on a daily basis.
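To show how little machinery these primitives need, the structure of the travel agency example can be captured in a few lines of Python. This encoding is my own sketch, not part of the FMC/TAM notation; the component names follow the example above:

```python
from dataclasses import dataclass, field

# A hypothetical, minimal encoding of the block-diagram primitives.
# The representation is my own invention, not part of FMC/TAM.

@dataclass
class BlockDiagram:
    actors: set = field(default_factory=set)    # active components
    stores: set = field(default_factory=set)    # passive components
    channels: set = field(default_factory=set)  # (actor, actor) communication
    access: set = field(default_factory=set)    # (actor, store, "r"/"w"/"rw")

# The simplified travel agency structure from the example above:
diagram = BlockDiagram(
    actors={"Customer", "Reservation System", "Information help desk"},
    stores={"Customer/Reservation Data", "Travel Information"},
    channels={("Customer", "Reservation System"),
              ("Customer", "Information help desk")},
    access={("Reservation System", "Customer/Reservation Data", "rw"),
            ("Information help desk", "Travel Information", "r")},
)

# The read-only / write-only / read-write patterns fall out of the access mode:
read_only = {(a, s) for (a, s, m) in diagram.access if m == "r"}
```

Such a representation is obviously no substitute for the diagram itself, but it makes the point that the notation has only a handful of concepts to learn.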

Usually, when providing such a diagram, it’s good practice to accompany it with some text explaining the different components and their roles. This shouldn’t be more than one or two paragraphs, though it depends on the level of detail and the system’s size.

This would generally help with two things: identifying redundant components, and describing the responsibility of each component clearly. Think of this text explanation as a way to validate your modeling, as displayed in the diagram.

Rule of thumb #2: If your explanation doesn’t include all the actors/stores depicted in the
diagram – you probably have redundant components.

Behavior Modeling

The dynamic behavior of a system is of course no less important than its structure. The cooperation, interaction and data flow between components allow us to identify failure points, bottlenecks, decoupling problems etc. In this case, TAM adopts largely the UML practice of using sequence diagrams or activity diagrams, whose description is beyond the scope of this post.

One thing to keep in mind though, is that when modeling behavior in this case, you’re usually not modeling interaction between classes, but rather between components. So the formalism of “messages” sent between objects need not couple itself to code structure and class/method names. Remember: you generally don’t model the software (code), but rather system components. So you don’t need to model the exact method calls and object instances, as is generally the case with UML models.

One good way to validate the model at this point is to verify that the components mentioned in the activity diagram are mentioned in the system’s structure (in the block diagram); and that components that interact in the behavioral model actually have this interaction expressed in the structural model. A missing interaction (e.g. channel) in the structural view may mean that these two components have an interface that wasn’t expressed in the structural model, i.e. the structure diagram should be fixed; or it could mean that these two components shouldn’t interact, i.e. the behavioral model needs to be fixed.

This is exactly the thought process this modeling helps to achieve – modeling two different facets of the system and validating one against the other in iterations allows us to reason about and validate our understanding of the system. The diagrams are simply the visual medium that helps us capture those ideas efficiently. Of course, keep in mind that you should validate the model at the appropriate level of abstraction – don’t validate a high-level system structure with a sequence diagram describing implementation classes.

Rule of thumb #3: Every interaction modeled in the behavioral model (activity/sequence
diagrams) should be reflected in the structural model (block diagram), and vice versa.
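This cross-check is mechanical enough to sketch in code. The following is a hypothetical illustration (the set-based model representation is mine, not part of TAM): given the channels from the block diagram and the interactions appearing in a sequence diagram, it reports what each model is missing:

```python
def validate_interactions(structural_channels, behavioral_interactions):
    """Cross-check the behavioral model against the structural model.

    Both arguments are sets of (component, component) pairs; order is
    ignored, since a channel permits interaction in either direction.
    Returns the pairs that appear in one model but not the other.
    """
    norm = lambda pairs: {frozenset(p) for p in pairs}
    structural = norm(structural_channels)
    behavioral = norm(behavioral_interactions)
    return {
        # interaction modeled, but no channel: fix the block diagram,
        # or the components shouldn't interact and the sequence diagram is wrong
        "missing_channel": behavioral - structural,
        # channel drawn, but never exercised in the behavioral views
        "unused_channel": structural - behavioral,
    }

issues = validate_interactions(
    structural_channels={("Customer", "Reservation System")},
    behavioral_interactions={("Customer", "Reservation System"),
                             ("Reservation System", "Travel Information")},
)
```

Either set being non-empty is not automatically an error – it is a prompt for exactly the "which model is wrong?" question described above.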

Domain Modeling

Another often useful aspect of modeling a system is modeling the data processed by the system. It helps to reason about the algorithms, expected load and eventually the structure of the code. This is often the part that’s not covered by well known patterns and needs to be carefully tuned per application. It also helps in creating a shared vocabulary and terminology when discussing different aspects of the developed software.

A useful method in the case of domain modeling is UML class diagrams, which TAM also adopts. Here too, I often find a scaled-down version most useful, usually focused on the main entities and their relationships (including cardinality). The notation of class diagrams can be leveraged to express these relationships quite succinctly.
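As an illustration, the main entities of a travel agency domain might be captured as lightweight classes annotated with their relationships and cardinality. The entity and field names below are my own assumptions, not taken from the FMC example:

```python
from dataclasses import dataclass, field
from typing import List

# A scaled-down domain model: main entities and relationships only.
# Entity names and fields are illustrative assumptions.

@dataclass
class Trip:
    destination: str
    price: float

@dataclass
class Reservation:
    trip: Trip  # each reservation is for exactly one trip (1..1)

@dataclass
class Customer:
    name: str
    # a customer holds zero or more reservations (0..*)
    reservations: List[Reservation] = field(default_factory=list)

alice = Customer("Alice")
alice.reservations.append(Reservation(Trip("Lisbon", 420.0)))
```

In a document, the same information would normally be drawn as a small class diagram; the point is that only entities, relationships and cardinalities are modeled, not implementation detail.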

Explicit modeling of the code itself is rarely useful in my opinion – the code will probably be refactored way faster than a model will be updated, and a reader who is able to read a detailed class diagram can also read the code it describes. One exception to this rule might be when your application deals with code constructs, in which case the code constructs themselves (e.g. interfaces) serve as the API to your system, and clients will need to write code that integrates with it, as a primary usage pattern of the system. An example for this is an extensible library of any sort (eclipse plugins are one prominent example, but there are more).

Another useful modeling facet in this context is to model the main concepts handled in the system. This is especially useful in very technical systems (oriented at developers), that introduce several new concepts, e.g. frameworks. In this case, a conceptual model can prove to be useful for establishing a shared understanding and terminology for anyone discussing the system.

Iterative Refinement

Of course, at the end of the day, we need to remember that modeling a system in fact reflects the thought process we have when designing the system. The end product, in the form of a document (or set of documents), represents our understanding of the system – its structure and behavior. But this is never a one-way process. It is almost always an iterative process that reflects our evolving understanding of the system.

So modeling a specific facet of the system should not be seen as a one-off activity. We often follow a dynamic where we model the structure of the system, but then try to model its behavior, only to realize the structure isn’t sufficient or leads to a suboptimal flow. This back and forth is actually a good thing – it helps us to solidify our understanding and converge on a widely understood and accepted picture of how the system should look, and how it should be constructed.

Refinement also happens along the axis of abstraction. Moving from a higher to a lower level of abstraction, we can provide more details of the system. We can refine as much as we find useful, down to the level of modeling the code (which, as stated above, is rarely useful in my opinion). When working on the details of a given view, it’s also common to find improvement points and issues in the higher-level description, so iterations can happen here as well.

As an example, consider the imaginary travel agency example quoted above. One possible refinement of the structural view could be something like this (also taken from the site above):

Example: travel agency system refined

In this case, more detail is provided on the implementation of the information help desk subsystem and the ‘Travel Information’ data store. While it provides more (useful) technical detail, this is still a block diagram, describing the structure of the system. This level of detail refines the high-level view shown earlier, and already provides more information and insight into how the system is built – for example, how the data stores are implemented and accessed, and the way data is adapted and propagated in the system. The astute reader will note that the ‘Reservation System’ subsystem now interacts with the ‘HTTP Server’ component in the ‘Information help desk’ subsystem. This makes sense from a logical point of view – the reservation system accesses the travel information through the same channels used to provide information to other actors – but this information was missing from the first diagram (no channel between the two components).
One important rule of thumb is that as you go down the levels of abstraction, you should keep the names of actors presented at the higher level of abstraction. This allows readers to correlate the views more easily, identify the different actors, and reason about their place in the system. It provides context for the finer-grained details. As the example above shows, the more detailed diagram still includes the actor and store names from the higher-level diagram (‘Travel Information’, ‘Information help desk’, ‘Travel Agency’).

Rule of thumb #4: Be consistent about names when moving between different levels of abstraction. Enable correlations between the different views.
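This naming rule can also be checked mechanically: the node names of the refined view should be a superset of the high-level view’s names. A trivial sketch (the representation is mine; the names are from the travel agency example):

```python
def check_name_consistency(high_level_names, refined_names):
    """Names from the higher abstraction level should survive refinement.

    Returns the set of high-level names dropped by the refined view;
    an empty set means the two views can be correlated by name.
    """
    return set(high_level_names) - set(refined_names)

high_level = {"Travel Agency", "Information help desk", "Travel Information"}
# The refined view keeps the old names and adds implementation-level nodes:
refined = {"Travel Agency", "Information help desk", "Travel Information",
           "HTTP Server", "DBMS"}

dropped = check_name_consistency(high_level, refined)  # empty: views correlate
```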

Communicating w/ Humans – Visualization is Key

With all this modeling activity going on, we have to keep in mind that our main goal, besides good design, is communicating this design to other humans, not machines. This is why, reluctant as we engineers are to admit it, aesthetics matter.

In the context of enterprise systems, communicating the design effectively is as important to the quality of the resulting software as designing it properly. In some cases, it might be even more important – just consider the amount of time you sometimes spend on system integration vs. how much time you spend writing the software itself. So a good-looking diagram is important, and we should be mindful of how we present it to the intended audience.

Following are some tips and pointers on what to look for when considering this aspect of communicating our designs. This is by no means an exhaustive list, but rather one based on experience (and some common sense). More pointers can be found in the links above, specifically in the visualization guide.

First, keep in mind that the visual arrangement of nodes and edges in your diagram directly affects how clear the diagram is to readers. Try to minimize intersections of edges, and align edges on horizontal and vertical axes.
Compare these two examples:

Aligning vertices

The arrangement on the left is definitely clearer than the one on the right. Note that generally speaking, the size of a node does not imply any specific meaning; it is just a visual convenience.

Similarly, this example:

Visual alignment

shows how the re-arrangement of nodes allows for less intersection, without losing any meaning.

Colors can also be very useful in this case. One can use colors to help distinguish between different levels of containment:

Using colors

In this case, the use of colors helps to distinguish an otherwise confusing structure. Keep in mind that readers might want to print the document on a black-and-white printer (or may be color-blind) – so use high-contrast colors where possible.

Label styles are generally not very useful for conveying meaning. Try to stick to a specific font and be consistent with it. An exception might be a label that pertains to a different aspect, e.g. configuration files or code locations, which might be more easily distinguished by a different font style.

Visuals have Semantics

One useful way to leverage the colors and layout of a diagram is to stress specific semantics you want to convey. You might use colors to distinguish one set of components from another, e.g. highlighting team responsibilities, or to highlight specific implementation details. Note that this kind of technique is not standard, so remember to include an explanation – a legend – of what the different colors mean. Also, too many colors can add clutter, eventually defeating the purpose of clarity.

Another useful technique is to use the layout of the nodes in the graph to convey an understanding. For example, the main data flow might be hinted at in the block diagram by laying out the nodes from left to right, or top to bottom. This is not required, nor does it carry any specific meaning, but it is often useful and provides hints as to how the system actually works.

Summary

As we’ve seen, “doing” architecture, while often perceived as a cumbersome and unnecessary activity, isn’t hard when done effectively. We need to keep in mind the focus of this activity: communicating our designs and reasoning about them over longer periods of time.

Easing the collaboration around design is not just an issue of knowledge sharing (though that’s important as well), but it is a necessity when trying to build software across global teams, over long periods of time. How effectively we communicate our designs directly impacts how we collaborate, the quality of produced software, how we evolve it over time, and eventually the bottom line of deliveries.

I hope this (rather long) post has shed some light on the subject, provided some insight and useful tips, and encouraged people to invest some effort in learning further.


Written by youryblog

January 17, 2015 at 6:27 PM

Posted in SW Design, SW Eng./Dev.

Over 70% of the cost (time) of developing a program goes out after it has been released +

Thu, 1 Jan 2015

Actually I found that usually the ones that find it the most fascinating
write the least legible code, because they never bother with software
engineering and design.

You can get a high school wiz kid to write the fastest code there is, but
there is no way you will be able to change anything about it five minutes
later.

Considering that over 70% of the cost (time) of developing a program goes
out after it has been released, when changes start to be asked for, that is
a problem.

– Micha Feigin, on the Csail-related mailing list


An interesting view on students’ grades: Dear Student: No, I Won’t Change the Grade You Deserve https://chroniclevitae.com/news/908-dear-student-no-i-won-t-change-the-grade-you-deserve?cid=VTEVPMSED1

Written by youryblog

January 2, 2015 at 10:04 PM

Re-Post: The End of Agile: Death by Over-Simplification

The End of Agile: Death by Over-Simplification

Copied for my students (I’m afraid of losing it) from: http://effectivesoftwaredesign.com/2014/03/17/the-end-of-agile-death-by-over-simplification/

There is something basically wrong with the current adoption of Agile methods. The term Agile has been abused, becoming the biggest hype ever in the history of software development, and generating a multi-million-dollar industry of self-proclaimed Agile consultants and experts selling dubious certifications. People have forgotten the original Agile values and principles, and instead follow dogmatic processes with rigid rules and rituals.

But the biggest sin of Agile consultants was to over-simplify the software development process and underestimate the real complexity of building software systems. Developers were convinced that software design would naturally emerge from the implementation of simple user stories, that it would always be possible to pay back the technical debt in the future, that constant refactoring is an effective way to produce high-quality code, and that agility can be assured by strictly following an Agile process. We will discuss each of these myths below.

Myth 1: Good design will emerge from the implementation of user stories

Agile development teams follow an incremental development approach, in which a small set of user stories is implemented in each iteration. The basic assumption is that a coherent system design will naturally emerge from these independent stories, requiring at most some refactoring to sort out the commonalities.

However, in practice the code does not have this tendency to self-organize. The laws governing the evolution of software systems are those of increasing entropy: when we add new functionality, the system tends to become more complex. Thus, instead of hoping for the design to emerge, software evolution should be planned through a high-level architecture including extension mechanisms.
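To make the idea of a planned extension mechanism concrete, here is a minimal sketch of one common form: an extension point with a registry, where new functionality plugs into a stable interface instead of modifying existing code. All names here are illustrative, not from the original article:

```python
# A minimal extension-point sketch: the core defines a stable interface,
# and new features plug in without modifying existing code.

class Exporter:
    """Stable extension interface the core system depends on."""
    def export(self, data: dict) -> str:
        raise NotImplementedError

_registry: dict = {}

def register(fmt: str, exporter: Exporter) -> None:
    """Extension point: new formats are added by registration."""
    _registry[fmt] = exporter

def export(fmt: str, data: dict) -> str:
    """Core code path; unaware of which formats exist."""
    return _registry[fmt].export(data)

# A new feature arrives as a plug-in, not as an edit to the core:
class CsvExporter(Exporter):
    def export(self, data: dict) -> str:
        return "\n".join(f"{k},{v}" for k, v in data.items())

register("csv", CsvExporter())
```

The design decision here is precisely the kind of up-front planning the myth dismisses: the interface and registry must exist before the stories that use them are implemented.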

Myth 2: It will always be possible to pay the technical debt in the future

The metaphor of technical debt became a popular euphemism for bad code. The idea of incurring some debt appears much more reasonable than deliberately producing low-quality implementations. Developers are ready to accumulate technical debt because they believe they will be able to pay this debt in the future.

However, in practice it is not so easy to pay the technical debt. Bad code normally comes together with poor interfaces and inappropriate separation of concerns. The consequence is that other modules are built on top of the original technical debt, creating dependencies on the simplistic design decisions that should be temporary. When eventually someone decides to pay the technical debt, it is already too late: the fix became too expensive.

Myth 3: Constant refactoring is an effective way to produce high-quality code

Refactoring became a very popular activity in software development; after all it is always focused on improving the code. Techniques such as Test-Driven Development (TDD) allow refactoring to be performed at low risk, since the unit tests automatically indicate if some working logic has been broken by code changes.
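As a minimal illustration of that safety net (the example is mine, not from the original article): a test pins down observable behavior, so a refactoring that changes the implementation but preserves the behavior keeps the test green.

```python
# Behavior pinned down before refactoring:
def total_price(items):
    # original, loop-based implementation
    total = 0.0
    for price, quantity in items:
        total += price * quantity
    return total

def total_price_refactored(items):
    # refactored implementation; the observable behavior must not change
    return sum(price * quantity for price, quantity in items)

cart = [(9.99, 2), (5.00, 1)]
# The unit test: both implementations must agree on observable behavior.
assert total_price(cart) == total_price_refactored(cart)
```

If the refactored version broke the logic, the assertion would fail immediately, which is exactly the low-risk property TDD provides.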

However, in practice refactoring consumes an exaggerated share of the effort invested in software development. Some developers simply do not plan for change, believing it will always be easy to refactor the system. The consequence is that some teams implement new features very fast in the first iterations, but at some point their work halts and they start spending most of their effort on endless refactorings.

Myth 4: Agility can be assured by following an Agile process

One of the main goals of Agility is to be able to cope with change. We know that nowadays we must adapt to a reality in which system requirements may be modified unexpectedly, and Agile consultants claim that we may achieve change-resilience by adhering strictly to their well-defined processes.

However, in practice the process alone cannot provide change-resilience. A software development team will only be able to address changing system requirements if the system was designed to be flexible and adaptable. If the original design did not take maintainability and extensibility into consideration, the developers will not succeed in incorporating changes, no matter how Agile the development process is.

Agile is Dead, Now What?

If we take a look at the hype chart below, it seems clear that Agile is past the "peak of inflated expectations" and getting closer to the "trough of disillusionment".

[Image: the Gartner hype cycle]

Several recent articles have proclaimed the end of the Agile hype. Dave Thomas wrote that “Agile is Dead”, and was immediately followed by an “Angry Developer Version”. Tim Ottinger wrote “I Want Agile Back”, but Bob Marshall replied that “I Don’t Want Agile Back”. Finally, what was inevitable just happened: “The Anti-Agile Manifesto”.

Now the question is: what will guide Agile through the “slope of enlightenment”?

In my personal opinion, we will have to go back to the basics: to all the wonderful design fundamentals that were being discussed in the 90's: the SOLID principles of OOD, design patterns, software reuse, and component-based software development. Only when we are able to incorporate these basic principles into our development process will we reach a true state of Agility, embracing change effectively.
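As one hedged sketch of what "designing for change" looks like in code (the class and function names here are invented for illustration), the Open/Closed and Dependency Inversion principles from SOLID let us add a new output format without modifying existing report logic:

```python
import json
from abc import ABC, abstractmethod

class Exporter(ABC):
    """The abstraction clients depend on (Dependency Inversion)."""
    @abstractmethod
    def export(self, record: dict) -> str:
        ...

class CsvExporter(Exporter):
    def export(self, record):
        return ",".join(str(v) for v in record.values())

class JsonExporter(Exporter):
    def export(self, record):
        return json.dumps(record, sort_keys=True)

def run_report(records, exporter: Exporter):
    # Open/Closed: new formats arrive as new Exporter subclasses;
    # this function is closed for modification, open for extension.
    return [exporter.export(r) for r in records]

rows = [{"id": 1, "name": "widget"}]
print(run_report(rows, CsvExporter()))   # ['1,widget']
print(run_report(rows, JsonExporter()))
```

A team that structures its code this way can absorb a changed requirement (a new export format) as an addition rather than a risky rewrite, which is the kind of change-resilience no process alone can supply.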

Another question: what will be the next step in the evolution of software design?

In my opinion: Antifragility. But that is the subject for a future post.

What about you? Did you also experience the limitations of current Agile practices? Please share with us in the comments below.

Written by youryblog

August 29, 2014 at 3:07 PM

Oracle APEX (some issues and solutions)

leave a comment »

Some good papers and solutions

  1. Oracle Application Express Deployment http://www.oracle.com/technetwork/developer-tools/apex/application-express/apex-deploy-installation-1878444.html
  2. Oracle Application Express 4.2 Downloads http://www.oracle.com/technetwork/developer-tools/apex/downloads/index.html
  3. Thread: help to upgrade to version 4.1 of APEX
    1. In my 10g XE installation (upgraded from 4.0.2) I did, in this order (https://forums.oracle.com/forums/thread.jspa?threadID=2273952):
      Install: SQL> @apexins SYSAUX SYSAUX TEMP /i/
    2. Upgrade the images folder: SQL> @apxldimg.sql c:\oraclexe (be sure about the correct location for the image files in ..\apex\img)
      old 1: create directory APEX_IMAGES as '&1/apex/images'
      new 1: create directory APEX_IMAGES as '/mnt/hgfs/vm_share/apex/images' -- make sure this is the right directory
      If you need to change the admin user password (https://forums.oracle.com/forums/thread.jspa?threadID=2358765):
      SQL> @apxchpwd
      Enter a value below for the password for the Application Express ADMIN user.
    3. Removing prior installations. Finding old installations:
      SELECT username FROM dba_users
      WHERE (username LIKE 'FLOWS_%' OR username LIKE 'APEX_%')
      AND username NOT IN ( SELECT 'FLOWS_FILES' FROM DUAL
      UNION
      SELECT 'APEX_PUBLIC_USER' FROM DUAL
      UNION
      SELECT SCHEMA s FROM dba_registry WHERE comp_id = 'APEX');
      Removing old installations: DROP USER APEX_040000 CASCADE;

      Notes: My installation (APEX 4.1 unzipped) resides in c:\oraclexe\apex (no empty apex folder inside the main apex folder).

      Always log in as:

      sqlplus system/password as sysdba

      Final result: works fine. Elapsed: 00:39:38.50

Written by youryblog

January 22, 2013 at 1:19 AM

In memory of Lary Bernstein. “Retrospective of Lawrence Bernstein”, NJIT Software Engineering

leave a comment »

Larry died peacefully at home on Friday November 2, 2012 according to his wishes.

I knew him for only several months (maybe half a year), his last months, but I was so glad to discuss SW Engineering problems, and the teaching of SW Engineering at post-secondary institutions, with him.

Before his death he sent an email with his retrospective ("MyRetro") to about 70 people on 20 October 2012. He had cancer. I think I can publish his message on my blog, especially because in the past I had already asked his permission to publish our discussions on my blog for my students and for SW Engineers. I think he would be happy to see that his ideas and his thoughts can be used by other SW Engineers and Computer Scientists even after his death.

May he rest in peace!
++++++++++++++++++++++++++++

Retrospective
Lawrence Bernstein
NJIT Software Engineering adjunct Professor
20 October 2012

I retired after 35 years at Bell Laboratories. I had great jobs there. Where else could a kid come out of Rensselaer Polytechnic Institute with an E.E. degree at the very beginning of the computer revolution and have the opportunity to shape that development?
The summer before my senior year at RPI, I wrote my first program. I had a summer job testing circuit boards for sonar systems. I found a computer closeted away and used it to write a program to compute settings for active sonar systems. I was so proud! It reduced two weeks of design work to 15 minutes. My colleagues were less than thrilled. They tsked over my foolishness in trusting such an unproven thing and went back to their manual design methods. Early Lesson #1: Follow your bliss.
Undeterred by the discouraging words, the following June found me graduated and working at Bell Labs, writing software for an antimissile system. My job was to understand the system design and translate it into a set of specifications for programming warhead detection software. This accidentally provided an early lesson in configuration management.
Sequentially numbered punched cards were used to create test data and, though we passed a software incentive fee test for the project, we did not get consistent results. The culprit was human error. What I thought were 40 cards cycled 1,000 times were actually only 39 cards. The computer center clerk had dropped the deck and never checked that the card order–and number of cards!–was correct before creating the driver magnetic tape. Early Lesson #2: People can confound the best laid plans.
NASA recognized the need for formal configuration management as part of the Apollo program. Their approach of manual punched card manipulation became the foundation for the development of the Source Card Control automated system that was the grandfather of today’s configuration control software tool industry.
The army customer next decided that a system to intercept many warheads in one engagement was critical. Redesign work had to include parallel processing with shared memory and up to ten processors. The National Academy of Sciences opined that such a system could never work. Early Lesson #3: Don’t be discouraged by anything you hear, but listen carefully. You might get a pearl.
My job was to design a task assignment system that spread the work load, honored precedence relations, and met the tight response time requirements of this real-time system. Our design worked, but my colleagues and I considered all suggestions and made special provision to dedicate several processors to battle planning software.
About this time, I realized the wisdom of the magic words, "What changed?" The response is usually "NOTHING," delivered defensively. Calm insistence on reviewing the configuration often reveals a change deemed harmless to be the culprit. One time we wrote and debugged software to transmit 24 reels of tape containing radar data from a Pacific Ocean island to the east coast of the United States in 36 hours so that Lincoln Labs could determine if the solar panels for Skylab had properly deployed. They hadn't. Early Lesson #4: Check and verify, again and again.
After a series of mostly successful tests on the Pacific Missile Range and the signing of the SALT II anti-missile treaty, I turned my attention to the Bell Telephone companies in the United States. Software was still the poor cousin of the industry. There was little respect for this mysterious stuff that seemed so irritable, so unknowable and yet so critical to the business. This lack of management vision caused a great many young people to be trained in software design and then let go, often to make brilliant careers elsewhere.
I became a software project manager in 1975 for a project called BISCOM that was in a shambles. When the car service drivers for the customers’ executives knew the daily status of BISCOM, things were really bad.
I had many problems to confront. I could not estimate how many people would be needed to create a given set of features or just how long it would take to produce a fully tested release. Too often I found myself explaining what went wrong rather than fixing problems. Fairly soon, though, some solid ground developed.
The relationship between developer and customer is fundamental to success, and prototyping is an excellent method for establishing mutual understanding. I found that teamwork would actually double productivity. It is difficult in the present day to remember just how difficult it was to explain and imagine methods that were essentially invisible to users who had always before had tangible “stuff” to handle.
One executive asked several accomplished software engineers to examine the project and its people. They reported, “Most of the developers have mathematics/computer science backgrounds. Although it is important to have developers with a good understanding of current software tools, it is at least equally as important to have people with sound engineering backgrounds. The mathematician and the engineer each bring to a task different and complementary viewpoints and one rarely finds individuals who can successfully represent both. Future hiring should be of candidates with engineering backgrounds until a better computer science/engineer balance is achieved.”
Was this carefully and dispassionately received by the 1975 BISCOM project management in the spirit in which it was offered? Hardly! “This recommendation is only partially accepted. Although we agree that people must grow to do software engineering and sophisticated design jobs, we do not accept the notion that an engineering background is necessary to achieve these goals.” The engineer vs. computer scientist issue caused the organization to discount the entire audit report but the notion of prototyping was quickly accepted. Early Lesson # 5: Prototype to understand the requirements.
It was the first time I encountered the Computer Science vs. Software Engineering turf battle. Unfortunately, it continues to this day with computer scientists arbitrarily calling themselves engineers, though without the necessary study for the degree.
The computer scientist is needed to produce the working software so that it performs needed features in a trustworthy manner. The software engineer packages the software into a system, solves problems, simplifies high level algorithms, estimates costs and schedules, models system reliability, models the economic value of the projected system and creates architecture and high level interface designs.
This conflict will remain unresolved until liability for failure is assigned to individuals as well as corporations.

WHAT WORKS

1. Start small, 10 to 20% of the ultimate staff, to understand the customer requirements, build a prototype and create a first order architecture. When there are more people available, assign them to learning new technology or the application domain while the high level architecture team performs exploratory development tasks.

2. Cultivate a friendly user site so that the customer’s people feel ownership in the project and help the developers understand what is really needed and meant by the abstract requirements.

3. Hire the best people you can afford, and steer the best of the best to the application. Simplify the product design and algorithms. Design simplification through reuse and redesign works well. Schedule and hold periodic project meetings with a fixed agenda.

4. Invite the customer’s people to offer solutions, but ask your own software engineers to develop THE solution.

5. Create a configuration management organization to maintain the official project libraries, do builds and track changes. This activity is variously called “software manufacturing” or “software administration.”

WHAT DOESN’T WORK
1. Metrics based on lines of code are useless and misleading. Tracking metrics for the sake of having metrics leads to a false sense of management control. For example, I found that the reported number of customer high-severity problems, even when the same problem was counted multiple times because it was reported from different installations, was a far better gauge of software product quality, and a better guide for bug-fixing efforts, than correction reports per thousand lines of source code.
2. Separating responsibility, accountability, authority and control often leads to chaos and interminable committee work.
3. Taking a one-size fits all approach and expecting the same processes to fit projects of all sizes is a mistake.
4. Insisting on specific software development processes before the project is defined, staffed or organized. The process must fit the problem.
5. Not having people assigned to tool development, as the best programmers will otherwise migrate from the application to making tools.

After retiring from Bell Labs, I went on to teach at Stevens Institute of Technology and created their Software Engineering Master’s Program. Then I had the opportunity to teach software engineering topics for three years at New Jersey Institute of Technology.
I had a very exciting fifty years helping the software industry grow. It is a shame that we still cannot agree that there is a serious and important difference between the computer scientist and the software engineer. Both are needed and both bring special skills and knowledge to building software systems.
Too many in our profession do not read our rich literature. I expected professionals who want me to respect them as software engineers to have studied or at least be familiar with these ten publications:
1. Frederick P. Brooks, The Mythical Man-Month, Anniversary Edition, Addison-Wesley, 1995, ISBN 0-201-83595-9
2. D.L. Parnas, “On the Criteria To Be Used in Decomposing Systems into Modules,” Communications of the ACM, Vol. 15, No. 12, Dec. 1972, pp. 1053-1058.
3. B.W. Boehm, “Software Engineering,” IEEE Transactions on Computers, Vol. C-25, No. 12 Dec 1976, pp. 1226-1241 and Software Engineering Economics, Prentice-Hall, 1981, ISBN 0-13-822122-7
5. Albert Endres and Dieter Rombach, A Handbook of Software and Systems Engineering, Pearson Addison-Wesley, 2003, ISBN 0-321-15420-7.
6. Peter G. Neumann, Computer Related Risks, Addison-Wesley, 1995, ISBN 0-201-55805-X.
7. Tom DeMarco & Timothy Lister, Peopleware 2nd ed., Dorset House, 1999, ISBN 0-932633-43-9
8. Martin Fowler, Refactoring: Improving the Design of Existing Code, Addison-Wesley, 1999, ISBN 0-201-48567-2.
9. Hans van Vliet, Software Engineering: Principles and Practice, 2nd edition, Wiley, 2000, ISBN 0-471-97508-7.
10. Lawrence Bernstein and C.M. Yuhas, Trustworthy Systems through Quantitative Software Engineering, Wiley, 2005, ISBN 0-471-69691-9

Written by youryblog

November 6, 2012 at 11:04 PM

IT jobs market (some info)

leave a comment »

  1. The Unemployable Programmer: “When companies find out I don’t have a degree that’s usually the end of the road. “
  2. IT Job Market Recovering Faster Than After Dot-Com Bubble Burst, InfoWorld (01/14/13), Ted Samson (from the January 16, 2013 edition of ACM TechNews, http://technews.acm.org/):
    "More new technology jobs have been created since the end of the past recession than during the same recovery period following the burst of the dot-com bubble and the early 1990s recession, according to a recent Dice.com report. In the 42 months since the most recent recession officially ended in June 2009, 180,600 tech jobs have been created. By contrast, in the 42 months following the end of the recession in March 1991, the total number of U.S. tech jobs dropped by 48,500. In addition, between November 2001 and April 2005, 415,600 tech jobs were lost. Although the past recessions were damaging to the tech industry, today tech jobs are steadily returning and the unemployment rate among tech professionals is much lower than the overall national average. At the end of 2012, the tech unemployment rate was 4.1 percent, while the national average was 8.7 percent. The unemployment rate for database administrators is 1.5 percent, the lowest among all tech-related categories. The second lowest rate is among network architects at 1.9 percent, while the rate for software developers is 2.9 percent, followed by computer systems analysts at 3.3 percent and Web developers at 3.5 percent."
    Full paper: http://www.nytimes.com/2013/01/15/technology/california-to-give-web-courses-a-big-trial.html
  3. Jon Swartz, USA TODAY. Second of five reports this week on the job outlook in key industries. Full paper: http://www.usatoday.com/story/money/business/2012/10/01/hot-tech-jobs-demand/1593105/
    "Data analysts are as important as the best engineers and designers. Job recruiters would say they're more important. A recent McKinsey Global Institute study called data analytics 'the next frontier for innovation, competition and productivity.'"

    "It's never been a better time to be a data scientist" (a role known in the industry as "quantitative jocks"), says John Manoogian III, co-founder and chief technology officer at 140 Proof. "Companies want to turn this data into insights about what people like and what might be relevant to them, but they need very specialized analytical talent to do this."

    And the job pays well: in San Francisco an average annual salary of $104,000, in New York $102,000, and in Chicago $86,000, according to Indeed.com. The average salary is $74,000, says the site Simply Hired.

  4. IT Jobs Light Up Top 100 Careers for 2013 by InfoWorld, December 20, 2013 http://www.acm.org/membership/careernews/archives/acm-careernews-for-tuesday-january-8-2013/
    “According to a recent U.S. News and World Report ranking of the 100 best jobs for 2013, systems analyst, database administrator, software developer, and Web developer are among the top 10 overall careers of the year. In addition, three other IT jobs — computer programmer, IT manager, and systems administrator — made the top 25. U.S. News and World Report based its rankings on several key factors: salary, job prospects, employment rate, and growth potential. Computer systems analyst was ranked fourth on the top 100 list with an overall score of 8.2 out of 10.
    According to the report, the median salary for systems analysts in 2011 was $78,770; the highest-paid 10% of systems analysts earned $120,060 and the lowest-paid took home $49,370. With a score of 8.0, database administrator was ranked the sixth best career for 2013. The median salary for DBAs was around $75,190 in 2011, with the top 10% netting $116,870 and the bottom 10% bringing home $42,360. Number seven on the list of top 100 jobs with an overall score of 7.9: software developer, a position earning a median salary of $89,280 in 2011.”
  5. The Secret to Getting Your New Job in the New Year, LinkedIn Today, December 28, 2013 (from http://www.acm.org/membership/careernews/archives/acm-careernews-for-tuesday-january-8-2013/):
    "While resumes are still essential in helping candidates get the initial interview, it's the ability to tell a compelling story that often gets you to the next round and, eventually, a new position. According to experienced executive recruiters, the lack of a purposeful and compelling story is the number one reason why candidates fail to win over prospective employers in job interviews. This is especially true at the highest executive levels, where the ability to tell a purposeful story helps to convince hiring managers they can lead organizations, persuade customers, manage employees and sell products."

Written by youryblog

October 3, 2012 at 1:38 PM

Tools for SW Engineering

leave a comment »

Written by youryblog

October 2, 2012 at 12:26 PM