Improving Your CFML Code Quality

Posted on 21-Jan-2018


(TOOLS FOR) IMPROVING YOUR CFML CODE QUALITY

KAI KOENIG (@AGENTK)

AGENDA

▸ Software and code quality

▸ Metrics and measuring

▸ Tooling and analysis

▸ CFLint

SOFTWARE AND CODE QUALITY

https://www.flickr.com/photos/jakecaptive/47697477

THE ART OF CHICKEN SEXING

bsmalley @ commons.wikipedia.org

AND NOW…QUALITY

CONDITION OF EXCELLENCE IMPLYING FINE QUALITY

AS DISTINCT FROM BAD QUALITY

https://www.flickr.com/photos/serdal/14863608800/

SOFTWARE AND CODE QUALITY

TYPES OF QUALITY

▸ Quality can be objective or subjective

▸ Subjective quality: dependent on personal experience to recognise excellence. Subjective quality is ‘universally true’ from the observer’s point of view.

▸ Objective quality: measure ‘genius’, quantify and repeat -> Feedback loop

SOFTWARE AND CODE QUALITY

CAN RECOGNISING QUALITY BE LEARNED?

▸ Chicken sexing seems to be something industry professionals lack objective criteria for

▸ Does chicken sexing as process of quality determination lead to subjective or objective quality?

▸ What about code and software?

▸ How can we improve in determining objective quality?

“ANY FOOL CAN WRITE CODE THAT A COMPUTER CAN UNDERSTAND. GOOD PROGRAMMERS WRITE CODE THAT HUMANS CAN UNDERSTAND.”

Martin Fowler

SOFTWARE AND CODE QUALITY

METRICS AND MEASURING

http://commadot.com/wtf-per-minute/

STANDARD OF MEASUREMENT

https://www.flickr.com/photos/christinawelsh/5569561425/

METRICS AND MEASURING

DIFFERENT TYPES OF METRICS

▸ There are various categories to measure software quality in:

▸ Completeness

▸ Performance

▸ Aesthetics

▸ Maintainability and Support

▸ Usability

▸ Architecture

METRICS AND MEASURING

COMPLETENESS

▸ Fit for purpose

▸ Code fulfils requirements: use cases, specs etc.

▸ All tests pass

▸ Tests cover all/most of the code execution

▸ Security

https://www.flickr.com/photos/chrispiascik/4792101589/

METRICS AND MEASURING

PERFORMANCE

▸ Artefact size and efficiency

▸ System resources

▸ Behaviour under load

▸ Capacity limitations

https://www.flickr.com/photos/dodgechallenger1/2246952682/

METRICS AND MEASURING

AESTHETICS

▸ Readability of code

▸ Matches agreed coding style guides

▸ Organisation of code in a class/module/component etc.

https://www.flickr.com/photos/nelljd/25157456300/

METRICS AND MEASURING

MAINTAINABILITY / SUPPORT

▸ Future maintenance of the code

▸ Documentation

▸ Stability/Lifespan

▸ Scalability

https://www.flickr.com/photos/dugspr/512883136/

METRICS AND MEASURING

USABILITY

▸ Positive user experience

▸ Positive reception

▸ UI leveraging best practices

▸ Support for impaired users

https://www.flickr.com/photos/baldiri/5734993652/

METRICS AND MEASURING

ARCHITECTURE

▸ System complexity

▸ Module cohesion

▸ Module dependency

https://www.flickr.com/photos/mini_malist/14416440852/

WHY BOTHER WITH MEASURING QUALITY?

https://www.flickr.com/photos/magro-family/4601000979/

“YOU CAN'T CONTROL WHAT YOU CAN'T MEASURE.”

Tom DeMarco

METRICS AND MEASURING


WHY WOULD WE WANT TO MEASURE ELEMENTS OF QUALITY?

▸ It’s impossible to add quality later, so start early to:

▸ identify potential technical debt

▸ find and fix bugs early in the development work

▸ track your test coverage.

METRICS AND MEASURING

COST OF FIXING ISSUES

▸ Rule of thumb: The later you find a problem in your software, the more effort, time and money are involved in fixing it.

▸ Note: There has NEVER been any scientific study into what the appropriate ratios are - it’s all anecdotal/made-up numbers… the zones of unscientific fluffiness!

METRICS AND MEASURING

HOW CAN WE MEASURE?

▸ Automated vs. manual

▸ Tools vs. humans

▸ Precise numeric values vs. ‘gut feeling’

… but what about those ‘code smells’?

METRICS AND MEASURING

WHAT CAN WE MEASURE?

▸ Certain metric categories lend themselves to being taken at design/code/architecture level

▸ Others might have to be dealt with on other levels, e.g. acceptance criteria, ’fit for purpose’, user happiness, etc.

METRICS AND MEASURING

COMPLETENESS

▸ Fit for purpose — Stakeholders/customers/users

▸ Code fulfils requirements: use cases, specs etc — BDD (to some level)

▸ All tests pass — TDD/BDD/UI tests

▸ Tests cover all/most of the code execution? — Code Coverage tools

▸ Security — Code security scanners

METRICS AND MEASURING

PERFORMANCE

▸ Artefact size and efficiency — Deployment size

▸ System resources — Load testing/System monitoring

▸ Behaviour under load — Load testing/System monitoring

▸ Capacity limitations — Load testing/System monitoring

METRICS AND MEASURING

AESTHETICS

▸ Readability of code — Code style checkers (to some level) & Human review

▸ Matches agreed coding style guides — Code style checkers

▸ Organisation of code in a class|module|component etc. — Architecture checks & Human review

METRICS AND MEASURING

MAINTAINABILITY/SUPPORT

▸ Future maintenance of the code — Code style checkers & Human review

▸ Documentation — Documentation tools

▸ Stability/Lifespan — System monitoring

▸ Scalability — System monitoring/Architecture checks

METRICS AND MEASURING

USABILITY

▸ Positive user experience — UI/AB tests & Human review

▸ Positive reception — Stakeholders/customers/users

▸ UI leveraging best practices — UI/AB tests & Human review

▸ Support for impaired users — a11y checker & UI/AB tests & Human review

METRICS AND MEASURING

ARCHITECTURE

▸ System complexity — Code style & Architecture checks

▸ Module cohesion — Code style & Architecture checks

▸ Module dependency — Code style & Architecture checks

METRICS AND MEASURING

LINES OF CODE

▸ LOC: lines of code

▸ CLOC: commented lines of code

▸ NCLOC: not commented lines of code

▸ LLOC: logic lines of code

LOC = CLOC + NCLOC

LLOC <= NCLOC
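As a minimal illustration of these counts (a hypothetical snippet, not from the deck), consider a small CFScript function:

```cfml
function addTax(required numeric price) {
    // 15% GST rate
    var rate = 0.15;
    return price * (1 + rate);
}
```

For the 5 lines above: LOC = 5, CLOC = 1 (the comment), NCLOC = 4, so LOC = CLOC + NCLOC holds; LLOC is 2 or 3 depending on whether the declaration counts as a logical statement, and in either case LLOC <= NCLOC.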

METRICS AND MEASURING

COMPLEXITY

▸ McCabe (cyclomatic) counts number of decision points in a function: if/else, switch/case, loops, etc.

▸ low: 1-4, normal: 5-7, high: 8-10, very high: 11+

▸ nPath tracks number of unique execution paths through a function

▸ values of 150+ are usually considered too high

▸ McCabe is usually a much smaller value than nPath

▸ Halstead metrics feed into the maintainability index metric - quite an involved calculation

METRICS AND MEASURING

COMPLEXITY

▸ McCabe complexity is 4

▸ nPath complexity is 8
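The slide’s numbers presumably referred to a code sample; as a reconstruction under that assumption (hypothetical function, not the original), three sequential if statements each add one decision point (McCabe: 1 + 3 = 4) and each double the number of execution paths (nPath: 2 × 2 × 2 = 8):

```cfml
// Each structKeyExists() check is one decision point for McCabe
// and multiplies the path count by 2 for nPath.
function describeUser(required struct user) {
    var label = "";
    if (structKeyExists(user, "name")) {
        label &= user.name;
    }
    if (structKeyExists(user, "age")) {
        label &= " (" & user.age & ")";
    }
    if (structKeyExists(user, "city")) {
        label &= ", " & user.city;
    }
    return label;
}
```

Note how quickly the two metrics diverge: with ten such if statements, McCabe would be 11 while nPath would already be 1024.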

METRICS AND MEASURING

MORE REFERENCE VALUES

Java         Low    Normal  High   Very High
CYCLO/LOC    0.15   0.2     0.25   0.35
LOC/method   7      10      13     20
NOM/class    4      7       10     15

METRICS AND MEASURING

MORE REFERENCE VALUES

C++          Low    Normal  High   Very High
CYCLO/LOC    0.2    0.25    0.30   0.45
LOC/method   5      10      16     25
NOM/class    4      9       15     23

TOOLING AND ANALYSIS

https://www.flickr.com/photos/gasi/374913782

“THE PROBLEM WITH ‘QUICK AND DIRTY’ FIXES IS THAT THE DIRTY STAYS AROUND FOREVER WHILE THE QUICK HAS BEEN FORGOTTEN”

Common wisdom among software developers

TOOLING AND ANALYSIS


TOOLING

▸ Testing: TDD/BDD/Spec tests, UI tests, user tests, load tests

▸ System management & monitoring

▸ Security: Intrusion detection, penetration testing, code scanner

▸ Code and architecture reviews and style checkers

TOOLING AND ANALYSIS

CODE ANALYSIS

▸ Static analysis: checks code that is not currently being executed

▸ Linter, syntax checking, style checker, architecture tools

▸ Dynamic/runtime analysis: checks code while being executed

▸ Code coverage, system monitoring

Test tools can fall into either category

TOOLING AND ANALYSIS

TOOLS FOR STATIC ANALYSIS

▸ CFLint: Linter, checking code by going through a set of rules

▸ CFML Complexity Metric Tool: McCabe index

TOOLING AND ANALYSIS

TOOLS FOR DYNAMIC ANALYSIS

▸ Rancho: Code coverage from Kunal Saini

▸ CF Metrics: Code coverage and statistics

STATIC CODE ANALYSIS FOR CFML


A STATIC CODE ANALYSER FOR CFML

▸ Started by Ryan Eberly

▸ Sitting on top of Denny Valiant's CFParser project

▸ Mission statement:

▸ ‘Provide a robust, configurable and extendable linter for CFML’

▸ Currently works with ACF and Lucee, though the main line of support is for ACF

▸ Team of 4-5 regular contributors

STATIC CODE ANALYSIS FOR CFML

CFLINT

▸ Written in Java, requires Java 8+ to compile and run

▸ Unit tests can be contributed/executed without Java knowledge

▸ CFLint depends on CFParser to grok the code to analyse

▸ Various tooling/integration through 3rd party plugins

▸ Source is on GitHub

▸ Built with Gradle, distributed via Maven

DEMO TIME - USING CFLINT

STATIC CODE ANALYSIS FOR CFML

LINTING (I)

▸ CFLint traverses the source tree depth first:

▸ Component → Function → Statement → Expression → Identifier

▸ CFLint maintains its own scope during linting:

▸ Current directory/filename

▸ Current component

▸ Current function

▸ Variables that are declared/attached to the scope

STATIC CODE ANALYSIS FOR CFML

LINTING (II)

▸ The scope is called the CFLint Context

▸ Provided to linting plugins

▸ Plugins do the actual work and feed reporting information back to CFLint based on information in the Context and the respective plugin

▸ TL;DR: plugins ≈ linting rules
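As a hypothetical illustration of the kind of issue such a rule plugin reports (the snippet is invented, and the rule name MISSING_VAR is recalled from memory rather than taken from the deck), consider an unscoped local variable:

```cfml
component {
    function getTotal(required array prices) {
        // 'total' is not var-scoped, so it leaks into the component's
        // variables scope - the sort of finding a scoping rule such as
        // MISSING_VAR would flag at this statement in the Context.
        total = 0;
        for (var p in prices) {
            total += p;
        }
        return total;
    }
}
```

When the linter visits the assignment statement, the Context tells the plugin which function it is in and which variables have been declared, which is exactly the information needed to report the missing `var`.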

STATIC CODE ANALYSIS FOR CFML

CFPARSER

▸ CFParser parses CFML code using two different approaches:

▸ CFML Tags: Jericho HTMLParser

▸ CFScript: ANTLR 4 grammar

▸ Output: AST (abstract syntax tree) of the CFML code

▸ CFLint builds usually rely on a certain CFParser release

▸ CFML expressions, statements and tags end up in CFLint being represented as Java classes: CFStatement, CFExpression etc.

STATIC CODE ANALYSIS FOR CFML

REPORTING

▸ Currently four output formats:

▸ Text-based for human consumption

▸ JSON object

▸ CFLint XML

▸ FindBugs XML

STATIC CODE ANALYSIS FOR CFML

TOOLING

▸ Various IDE and CI server integrations

▸ 3rd party projects: SublimeLinter (Sublime Text 3), ACF Builder extension, AtomLinter (Atom), Visual Studio Code

▸ IntelliJ IDEA coming later this year or early 2018 — from me

▸ Jenkins plugin

▸ TeamCity (via Findbugs XML reporting)

▸ SonarQube

▸ NPM wrapper

STATIC CODE ANALYSIS FOR CFML

CONTRIBUTING

▸ Use CFLint with your code and provide feedback

▸ Talk to us and say hello!

▸ Provide test cases in CFML for issues you find

▸ Work on some documentation improvements

▸ Fix small and beginner-friendly CFLint tickets in Java code

▸ Become part of the regular dev team! :-)

STATIC CODE ANALYSIS FOR CFML

ROADMAP

▸ 1.0.1 — March 2017; first release after 2 years of betas :)

▸ 1.1 — June 2017; internal release

▸ 1.2.0-3 — August 2017

▸ Documentation/output work

▸ Internal changes to statistics tracking

▸ 1.3 — In progress; parsing/linting improvements, CommandBox

STATIC CODE ANALYSIS FOR CFML

ROADMAP

▸ 2.0 — 2018

▸ Complete rewrite of output and reporting

▸ Complete rewrite and clean up of configuration

▸ Performance improvements (parallelising linting)

▸ API for tooling

▸ Code metrics

STATIC CODE ANALYSIS FOR CFML

ROADMAP

▸ 3.0 — ???

▸ Support for rules in CFML

▸ Abstract internal DOM

▸ New rules based on the DOM implementation

FINAL THOUGHTS

RESOURCES

▸ CFLint: https://github.com/cflint/CFLint

▸ CFML Complexity Metric Tool: https://github.com/NathanStrutz/CFML-Complexity-Metric-Tool

▸ Rancho: http://kunalsaini.blogspot.co.nz/2012/05/rancho-code-coverage-tool-for.html

▸ CF Metrics: https://github.com/kacperus/cf-metrics

FINAL THOUGHTS

GET IN TOUCH

Kai Koenig

Email: kai@ventego-creative.co.nz

Twitter: @AgentK

Telegram: @kaikoenig
