dinsdag 12 januari 2016

testing log statements

Introduction

Testing can sometimes make your head spin, for example when a method returns void and you are not allowed to use PowerMock.
The trouble with a method like that is that it does not return an object you can assert on.
If you are lucky it might have a parameter that changes during the execution of the method.
Sometimes it has a log statement, based on an exception or on an event captured in an if statement, that happened during execution.
That gives opportunities too. Let me show how.

The solution
 
For testing log output, the best thing to do is to work with log appenders.
//We start with a mock appender:
@Mock
private Appender mockAppender;

//Create the root logger that holds all the appenders,
//and attach the mock appender to it.
ch.qos.logback.classic.Logger root =
    (ch.qos.logback.classic.Logger) LoggerFactory.getLogger(
        ch.qos.logback.classic.Logger.ROOT_LOGGER_NAME);
root.addAppender(mockAppender);

//Verify that the mock appender received the expected logging event,
//using an ArgumentMatcher.
Mockito.verify(mockAppender).doAppend(Mockito.argThat(
    new ArgumentMatcher<Object>() {
        //Implement the matches method to see if the
        //log text appears in the logging event.
        @Override
        public boolean matches(final Object argument) {
            return ((LoggingEvent) argument).getFormattedMessage()
                //The text we want to search for.
                .contains("test logging text that should be found");
        }
    }));
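The snippet above assumes Mockito and Logback on the test classpath. If you only have the JDK at hand, the same trick works with a hand-written handler on `java.util.logging` — a minimal sketch for illustration (the class and message names are made up):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.logging.Handler;
import java.util.logging.LogRecord;
import java.util.logging.Logger;

public class LogCaptureDemo {

    // A handler that simply collects every log record it receives.
    static class CapturingHandler extends Handler {
        final List<LogRecord> records = new ArrayList<>();
        @Override public void publish(LogRecord record) { records.add(record); }
        @Override public void flush() { }
        @Override public void close() { }
    }

    public static void main(String[] args) {
        Logger logger = Logger.getLogger("demo");
        CapturingHandler handler = new CapturingHandler();
        logger.addHandler(handler);

        // The void method under test would log somewhere in here.
        logger.info("test logging text that should be found");

        // Assert on the captured records instead of on a return value.
        boolean found = handler.records.stream()
                .anyMatch(r -> r.getMessage().contains("test logging text"));
        System.out.println(found);  // prints true
    }
}
```

The idea is the same as with the mock appender: hang a test-owned appender/handler on the logger, trigger the void method, and assert on what was captured.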
 

The conclusion
Void methods can still be a call for an aspirin when it comes to testing, but there are some helpful techniques and I believe this is one of them. I used it this time to test a method that checked whether a certificate was expired. This way I could pump up the test coverage of my code.
Have fun!
 

vrijdag 18 december 2015

A bit of Reflection

Introduction

In all the years I have been programming, one of the dullest things I found to do was writing and maintaining converters. I had a couple of discussions about the subject. The point that always came up as a defence against bean mappers like Dozer was the fact that they use Reflection under the hood. I agree up to some level that Reflection should be used sparingly in production code. The interesting point is that most third-party libraries use Reflection under the hood. Every library that uses annotations, for example, is using it. Think about Spring, all the JPA libraries, log4j, JUnit etc...
So why not for something like bean mapping? Let's see if we can build one ourselves.


The solution

A bean mapper is meant to copy values from one bean to the other. So what you need to accomplish this is to know the field names of the "other" bean.

The solution is actually not that complicated. What I did was create a couple of annotations and a handler.

If the field names are different, you can pass the name of the field in the "other" bean by using an annotation.
The annotations look like this:


//The retention policy tells how long the annotation is kept; RUNTIME means it can be read through reflection.
@Retention(RetentionPolicy.RUNTIME)
//The target tells where the annotation may be placed. In this case fields only.
@Target(ElementType.FIELD)
public @interface Copy {
    // The string that contains the value given to the annotation.
    String copyTo();

}

If the field names are the same you can do it without an annotation.
I will show that later.

If you don't want a field to be copied, you can simply use another annotation that excludes it from the copy process:

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.FIELD)
public @interface IgnoreCopy {
}

The annotations in use look something like this:

// copying with a different field name
@Copy(copyTo = "testString2")
private String testString;
// copying with the same field name
private String testStringNoAnnotation;
// should not be copied
@IgnoreCopy
private String notToCopy;


To copy the values first of all you need the fields of the bean to copy from:
//Use getDeclaredFields instead of getFields; getFields only returns the
//public fields, so you would not find the private fields you are searching for.
Field[] fields = copyFrom.getClass().getDeclaredFields();

//looping through the fields to find the annotation if needed.
for(Field field : fields) {
    //The annotation if the field names are different.
    Copy copy = field.getAnnotation(Copy.class);
    //The annotation to see if the field shouldn't be copied. 
    IgnoreCopy ignoreCopy = field.getAnnotation(IgnoreCopy.class);
    if(ignoreCopy == null) {
        if (copy != null) {
            //the actual method to copy with a different field name
            addValueToField(copyFrom, copyTo, field, copy.copyTo());
        } else {
            //the actual method to copy with the same field name
            addValueToField(copyFrom, copyTo, field, field.getName());
        }
    }
}


private void addValueToField(final Object copyFrom, 
final Object copyTo, final Field field, final String name) {
    try {
        // Searching for the field in the "other" bean
        Field copyToField = copyTo.getClass().getDeclaredField(name);
        // The fields are private so we make them accessible
        copyToField.setAccessible(true);
        field.setAccessible(true);
        // The actual copying of the values
        copyToField.set(copyTo, field.get(copyFrom));
        //stop the accessibility of the fields. 
        copyToField.setAccessible(false);
        field.setAccessible(false);
    } catch (NoSuchFieldException e) {
        log.error("could not find field " + name, e);
    } catch (IllegalAccessException e) {
        log.error("could not access field " + name, e);
    }
}
 
 
 
// I added two more methods that are helpful for cherry-picking the fields.
  
/**
 * Checks if the ignore annotation is set on the field in the target class.
 * @param copyTo the target object.
 * @param fieldName the field name to look up.
 * @return true if the field should not be copied.
 */
private boolean ignoreCopyToField(final Object copyTo, final String fieldName) {
    Field field = null;
    // A list of field names to prevent the NoSuchFieldException
    List<String> fieldNames = checkFieldExists(copyTo);
    IgnoreCopy ignoreCopy = null;
    try {
        // Check if the field exists in the target class.
        if (fieldNames.contains(fieldName)) {
            // Retrieve the field with that name
            field = copyTo.getClass().getDeclaredField(fieldName);
            // Retrieve the annotation from the field
            ignoreCopy = field.getAnnotation(IgnoreCopy.class);
        }
    } catch (NoSuchFieldException e) {
        e.printStackTrace();
    }
    return ignoreCopy != null;
}

/**
 * Creates a list that holds all the field names of the target class.
 * @param copyTo the target object
 * @return a list with all the field names
 */
private List<String> checkFieldExists(Object copyTo) {
    List<String> fieldNames = new ArrayList<String>();
    Field[] fields = copyTo.getClass().getDeclaredFields();
    for (Field field1: fields) {
        fieldNames.add(field1.getName());
    }
    return fieldNames;
}
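To see all the pieces working together, here is a self-contained sketch of the whole copier in one class (the bean classes, field names and values are made up for the example):

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Field;

public class CopyDemo {

    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.FIELD)
    @interface Copy { String copyTo(); }

    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.FIELD)
    @interface IgnoreCopy {}

    static class Source {
        @Copy(copyTo = "name") String fullName = "Alice";  // different field name
        String city = "Utrecht";                           // same field name
        @IgnoreCopy String secret = "hidden";              // must not be copied
    }

    static class Target {
        String name;
        String city;
        String secret;
    }

    static void copy(Object from, Object to) throws Exception {
        for (Field field : from.getClass().getDeclaredFields()) {
            // Skip fields marked with @IgnoreCopy.
            if (field.getAnnotation(IgnoreCopy.class) != null) continue;
            // Use the @Copy name if present, otherwise the field's own name.
            Copy copy = field.getAnnotation(Copy.class);
            String targetName = copy != null ? copy.copyTo() : field.getName();
            Field targetField = to.getClass().getDeclaredField(targetName);
            field.setAccessible(true);
            targetField.setAccessible(true);
            targetField.set(to, field.get(from));
        }
    }

    public static void main(String[] args) throws Exception {
        Source s = new Source();
        Target t = new Target();
        copy(s, t);
        System.out.println(t.name + " " + t.city + " " + t.secret);  // prints: Alice Utrecht null
    }
}
```

The `secret` field stays null in the target because the `@IgnoreCopy` check short-circuits before any value is read.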
 

Conclusion

The choice to be made is more of a philosophical one. Is it worth adding some more reflection to the project at hand? I think the decision is to use it where it is useful. If it can save you writing and maintaining miles of boring code, my advice would be to do it. In many cases it already happens. So think about it and work this out for yourself. The code to build this is short and simple.

The original source code can be downloaded here
The jar file can be downloaded here
Have fun!


dinsdag 10 februari 2015

Sorting data

Introduction:

For the issue I had at hand we needed to sort the data to create a matrix. So actually we needed to morph a list into a matrix form.

The problems:

At first I was thinking about using the states of the data to figure out where in the matrix each item should find its place. As soon as I made a drawing of my idea, I stopped working on it. I would have needed to create a complex structure with a lot of if statements. I understood that, if I chose this way, I might run into a lot of bugs. I needed to think harder.

The solution:

I turned the whole idea around and looked at it from the other side. Instead of creating the matrix directly from the list I could do something like a pre-sort. The next question obviously would be: how the hell do I do that? The answer was actually quite simple looking back at the issue. I took a LinkedHashMap and a TreeMap. I made sure that the query returned everything sorted by date.

In the LinkedHashMap I used the date as the key and the TreeMap as the value. The LinkedHashMap keeps things in insertion order, so it kept the by-date order of the query I created. The second ordering would be by the currency the payment was done in. This I used as the key in the TreeMap, together with the rest of the object as the value. The TreeMap sorts its keys in alphabetic or numeric order using a red-black tree.

Now all I had to do was write a double loop over both of the maps and put the data I needed in place.
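That double loop over the two maps can be sketched like this (the dates, currencies and amounts are made-up sample data):

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.TreeMap;

public class MatrixSort {
    public static void main(String[] args) {
        // The outer LinkedHashMap keeps the insertion (query) order by date;
        // the inner TreeMap sorts the currencies alphabetically.
        Map<String, TreeMap<String, Double>> byDate = new LinkedHashMap<>();
        byDate.computeIfAbsent("2015-02-01", d -> new TreeMap<>()).put("USD", 10.0);
        byDate.computeIfAbsent("2015-02-01", d -> new TreeMap<>()).put("EUR", 5.0);
        byDate.computeIfAbsent("2015-02-02", d -> new TreeMap<>()).put("EUR", 7.5);

        // The double loop: rows in date order, cells in currency order.
        for (Map.Entry<String, TreeMap<String, Double>> row : byDate.entrySet()) {
            for (Map.Entry<String, Double> cell : row.getValue().entrySet()) {
                System.out.println(row.getKey() + " " + cell.getKey() + " " + cell.getValue());
            }
        }
        // prints:
        // 2015-02-01 EUR 5.0
        // 2015-02-01 USD 10.0
        // 2015-02-02 EUR 7.5
    }
}
```

Note that even though USD was inserted before EUR, the TreeMap puts EUR first within each date.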


Conclusion:

Sometimes it is better to make that drawing and have a better look at what you need to do. I believe that part is the greatest joy in my job: creating something that makes me proud.

Have fun!


vrijdag 30 januari 2015

saving passwords the safe way

Introduction:

Every time I search on security matters, I always find things to prevent attacks from the outside. These issues are important, there is no doubt about that. But the one issue that kept running around my mind was the case where they get into your database. That is the place an unwanted visitor should stay away from. One of the last-resort solutions to prevent this unwanted visitor from stealing your customer data is to hash the passwords and encrypt the data in the database. The downside of this technique is that most of the encrypting and hashing methods already have white papers. So the only thing it buys you is a bit more time before they get to your data.

Part of the solution:

I did not solve the whole case of stealing data yet. But I did find a way to store passwords strongly protected in my database. The solution is quite simple: don't store the real password in the database but an abstraction of that password. Every character has a numeric value, so changing the characters of the password into numbers gives you the possibility to calculate with them, like this:


double key = 0;
char[] passwordChars= password.toCharArray();
for (char keyPart : passwordChars) {
      key += Character.getNumericValue(keyPart);
}

Now you have a numeric value of the password. How far you take this calculation is up to you.
The next question is of course what to do with a number. The simple thing is that you can already store this number in the database and you have an abstraction of the password. But the fun is to push it a bit further. You can create a HashMap that has a double as its key and a String as its value.
With the calculated number you get the string value out of the map and store that as the password.

Something like this:
private Map<Double, String> falsePasswords = new HashMap<Double, String>();
and fill it with nice long strings with a lot of weird characters:
falsePasswords.put(1.0, "*(_)((UIiuyuUITYFTYR%R&$%&%tituyutyr867987yuyuiyo8&)*0");
I would make this a long list of false passwords and create a calculation on the password that gives a good range of false passwords.

To make it look like a real password you can hash it and make the unwanted visitor think that he can decrypt all your passwords.
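Putting the calculation and the false-password map together, a small sketch (the map content is the example string from above; the lookup method is my own addition for illustration):

```java
import java.util.HashMap;
import java.util.Map;

public class PasswordAbstraction {

    // The table of "false passwords", keyed by the calculated number.
    private static final Map<Double, String> FALSE_PASSWORDS = new HashMap<>();
    static {
        FALSE_PASSWORDS.put(1.0, "*(_)((UIiuyuUITYFTYR%R&$%&%tituyutyr867987yuyuiyo8&)*0");
    }

    // The calculation from the post: sum the numeric values of the characters.
    static double calculateKey(String password) {
        double key = 0;
        for (char keyPart : password.toCharArray()) {
            key += Character.getNumericValue(keyPart);
        }
        return key;
    }

    // Look up the false password that gets stored instead of the real one.
    static String lookup(double key) {
        return FALSE_PASSWORDS.getOrDefault(key, "no entry");
    }

    public static void main(String[] args) {
        double key = calculateKey("1"); // '1' has numeric value 1, so the key is 1.0
        System.out.println(key + " -> " + lookup(key));
    }
}
```

What ends up in the database is the false password for the calculated key, never the password itself.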

Conclusion:

The solutions to most problems are not that hard. In this case a simple calculation and a HashMap brought the solution.

Have fun!

maandag 12 januari 2015

Spring-boot rest/jpa full blown

Introduction

For a while I was looking for a suitable solution where I could use Spring REST services combined with the Spring/JPA repositories. What I actually was looking for was a query-driven REST service where I did not have to convert one bean to another. That is awful boilerplate code and a maintenance nightmare.

After a lot of searching I ran into Spring Boot, where they got a couple of steps further than I was looking for. After studying it for a while I got used to their ideas and I must say I like them.

While I was searching I ran into a lot of examples, but none of them showed me how to deal with Spring Boot in a professional and serious manner. So I picked up a lot from the reference guide and put the pieces of the puzzle together.

The configuration
I built the whole story in Maven; for the Gradle fans, have a look at the reference guide, where it is fully explained how to do it in Gradle.

The basics of Spring Boot is the simple fact that they provide you with a super pom for Maven from which you can cherry-pick the parts you really like to use. Here are the cherries I picked from that tree:

The project pom 
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.purat.services</groupId>
    <artifactId>userservice</artifactId>
    <version>1.0-SNAPSHOT</version>
    <packaging>jar</packaging>

    <properties>
        <tomcat.version>8.0.3</tomcat.version>
        <!-- use UTF-8 for everything -->
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
        <start-class>com.purat.Application</start-class>
        <spring-boot-version>1.1.10.RELEASE</spring-boot-version>
    </properties>
    <!-- The super pom provided by Spring Boot -->
    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>1.1.10.RELEASE</version>
    </parent>

    <dependencies>
        <!-- These are the dependencies that deliver all the necessary JPA libraries to approach the database. -->
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-data-jpa</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-jdbc</artifactId>
        </dependency>
        <!-- This is the library that delivers all the necessary REST libraries, to create for example your own endpoints. -->
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-data-rest</artifactId>
        </dependency>
        <!-- This is the library that gives us the web application part. -->
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>
        <!-- This is the library that gives us the production-ready parts. For more details look at spring-boot-starter-actuator. -->
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-actuator</artifactId>
        </dependency>
        <!-- For the database I used PostgreSQL. -->
        <dependency>
            <groupId>postgresql</groupId>
            <artifactId>postgresql</artifactId>
            <version>9.0-801.jdbc4</version>
        </dependency>
        <!-- For handling the getters and setters I used Lombok. -->
        <dependency>
            <groupId>org.projectlombok</groupId>
            <artifactId>lombok</artifactId>
                <version>1.12.6</version>
                <scope>compile</scope>
        </dependency>
    </dependencies>

    <build>
        <pluginManagement>
        <plugins>
            <!-- This plugin is responsible for the kind of packaging you desire: a war or a jar.
                 It will build the artifact with the desired content.
                 To deploy a war file there are still a couple of steps to take; you can find the
                 description in the reference guide under "traditional deployment". -->
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <version>3.1</version>
                <configuration>
                    <source>1.8</source>
                    <target>1.8</target>
                </configuration>
            </plugin>
        </plugins>
        </pluginManagement>
    </build>

    <repositories>
        <repository>
            <id>spring-releases</id>
            <name>Spring Releases</name>
            <url>https://repo.spring.io/libs-release</url>
        </repository>
        <repository>
            <id>org.jboss.repository.releases</id>
            <name>JBoss Maven Release Repository</name>
            <url>https://repository.jboss.org/nexus/content/repositories/releases</url>
        </repository>
    </repositories>

    <pluginRepositories>
        <pluginRepository>
            <id>spring-releases</id>
            <name>Spring Releases</name>
            <url>https://repo.spring.io/libs-release</url>
        </pluginRepository>
    </pluginRepositories>
</project>

The Application class and application.properties

Despite what you see on the internet, I like to keep the Application class to a bare minimum. We only need this baby to start the project up after we build a war or a jar.


import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.context.annotation.ComponentScan;
import org.springframework.context.annotation.Configuration;

/**
 * Created by compurat on 1/11/15.
 */
//Indicates that a class declares one or more @Bean methods and may be processed by the Spring container to generate bean definitions and service requests for those beans at runtime.
@Configuration
// This annotation configures the application based on the beans and libraries found on the classpath.
@EnableAutoConfiguration
//Configures component scanning directives for use with @Configuration classes.
@ComponentScan
public class Application {

    public static void main(String[] args) {
//This is the only line you need to start up your application.
        ConfigurableApplicationContext context = SpringApplication.run(Application.class);
    }

}

The application.properties file under the resources folder:

The way this works looks to me like the Wicket framework.
# The configuration of the database
spring.datasource.url=jdbc:postgresql://localhost:5432/people
spring.datasource.username=Pieter
spring.datasource.password=Pieter01
spring.datasource.driver-class-name=org.postgresql.Driver
spring.jpa.database-platform=org.hibernate.dialect.PostgreSQLDialect
hibernate.hbm2ddl.auto=create-drop
spring.jpa.show-sql=true
# Endpoint properties: give the endpoint an id, allow it to be shut down, and
# "sensitive" is security-wise: does the endpoint need a username and password to be approached.
endpoints.beans.id=springbeans
endpoints.beans.sensitive=false
endpoints.shutdown.enabled=true
# Tomcat server log enabling and patterns.
server.tomcat.access-log-enabled=true
server.tomcat.access-log-pattern=%a asdasd
# The log file properties.
logging.file= logging/userservice.log
logging.level.org.springframework.web: DEBUG
logging.level.org.hibernate: ERROR

Basically we now have a ready-to-run Spring Boot web service with JPA. It does not work yet; we still need to fill in the missing pieces.

The application
Let's start with the REST endpoint. That is where it all begins. This is the profile endpoint and it does not take any parameters in this case. But it gives you an idea how an endpoint in Spring Boot works:

import com.purat.data.Person;
import com.purat.data.repository.PeopleRepository;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

/**
 * Created by compurat on 1/11/15.
 */
//Tells Spring that this is obviously a REST controller
@RestController
//Tells spring what the base url is.
@RequestMapping("peopleservice")
public class ProfileEndpoint {
    @Autowired
//Spring/JPA repository to approach the database
    private PeopleRepository peopleRepository;
    //Tells Spring what the endpoint url is.
    @RequestMapping("/profile")
    public Person profile() {
        // The JPA entity. The query parameter names are part of the repository
        // method name, in this case EmailadressAndPassword.
        Person person =
                peopleRepository.findByEmailadressAndPassword("pieter.roskam@gmail.com", "Roskam01");
        // If you return the entity from the rest controller, Spring will
        // serialize the object to JSON behind the scenes.
        return person;
    }
}

As the second part I would like to handle the JPA entity:
import lombok.Getter;
import lombok.Setter;

import javax.persistence.Entity;
import javax.persistence.Id;

/**
 * Created by compurat on 1/11/15.
 */
//Lombok getters and setters keep your code nice and clean.
@Getter
@Setter
//What table to approach
@Entity(name="people")
public class Person {
// the unique id and all the fields as named in the table.
    @Id
    private long id;
    private String firstname;
    private String lastname;
    private String address;
    private String housenumber;
    private String postalcode;
    private String city;
    private String telephone;
    private String verification;
    private String emailadress;
    private String password;

}

As the third and last part I would like to show the JPA repository:
package com.purat.data.repository;

import com.purat.data.Person;
import org.springframework.data.repository.CrudRepository;

 
The Spring/JPA repository gives some benefits. For simple queries you can use the parameter names in the method name, but for more complicated queries you can also use the @Query annotation to write a JPQL query.
public interface PeopleRepository extends CrudRepository<Person,Long> {

    Person findByEmailadressAndPassword(String emailadress, String password);
}
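As an illustration of that @Query variant, a hypothetical sketch (the repository name, the method name and the query itself are made up; it assumes the Person entity from above, whose JPQL entity name is "people" and which has a `city` field):

```java
import java.util.List;

import org.springframework.data.jpa.repository.Query;
import org.springframework.data.repository.CrudRepository;
import org.springframework.data.repository.query.Param;

public interface PeopleQueryRepository extends CrudRepository<Person, Long> {

    // A made-up JPQL query: all people in a given city, newest id first.
    @Query("select p from people p where p.city = :city order by p.id desc")
    List<Person> findInCity(@Param("city") String city);
}
```

The derived method names work fine for equality checks; the @Query route is for anything the method-name grammar cannot express.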

Conclusion

Spring Boot took a completely new road to developing and deploying REST services. The most interesting part is probably that you can have a REST service in a jar file. That was a mind-boggler to me for a while. The secret lies in the fact that it is a self-executing jar. While starting up your Application class it will also start up Tomcat, if you configured it right (dependencies and properties file).

The second part that might be of interest is the fact that it looks like the separation of tiers is gone. But if you look closely it is not:

1. The separation of the database is still intact because it is hidden behind the Spring/JPA repositories. They are not reachable from the outside.

2. The entity bean seems to be exposed at the frontend, but again that is not completely true. The bean will be serialized to JSON by Spring. That is the only visible thing on the outside. The only one that should reach this service is the frontend, which is also separate.

I love the part that you can package a REST service into a jar. You put it somewhere on your server and start it with java -jar servicewhatever.jar. That will fire up everything you built for this REST service.


Have fun!

zondag 26 oktober 2014

Execution Framework introduction

Introduction:

In the executor service world there are many ways to do a trick. I have been googling around and found a lot of words, but none of them explains well what is what and why.

The solution:

Most of the executor services are obtained through the Executors class, like:
Executors.newCachedThreadPool();
This thread pool provides you with the number of threads that you need. It reuses threads that become available again.
Executors.newFixedThreadPool(10);
This thread pool only provides you with the number of threads that you have declared, 10 in my example.
Executors.newSingleThreadExecutor();
This thread pool contains one single thread and all given tasks will be executed in sequence.

Except for the SingleThreadExecutor, all of the others behave the same way when it comes to lists. As soon as you expose a list to one of the named thread pools, the behaviour is as follows:

Let's say we take a CachedThreadPool or a FixedThreadPool and we expose a list of elements to the executor service initialized with one of the above named pools. The first element of the list that is exposed for threading will be spread over all the available threads. As soon as that task is done, the next element of the list will use all of the threads. This is a good thing if that is what you desire. But what if you want all the elements of the list spread over different threads? In other words, every element of the list should be worked on at the same moment, in parallel.

This is where the only pool comes in that can do that: the ForkJoinPool (since Java 7). This pool is not obtained through the Executors class but is instantiated on its own, like:

ForkJoinPool forkJoinPool = new ForkJoinPool(3);
The constructor takes an int as argument to initialize the number of threads, 3 in my example. The ForkJoinPool uses the work-stealing philosophy. This means that if a thread is running out of jobs, it will visit one of the neighbour threads and see if it can help out.
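A classic way to feed work to a ForkJoinPool is a RecursiveTask that splits its work in two and forks one half. A minimal sketch (the task, threshold and the numbers are made up for the example):

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

public class SumTask extends RecursiveTask<Long> {
    private final long[] numbers;
    private final int from, to;

    SumTask(long[] numbers, int from, int to) {
        this.numbers = numbers;
        this.from = from;
        this.to = to;
    }

    @Override
    protected Long compute() {
        if (to - from <= 2) {              // small enough: compute directly
            long sum = 0;
            for (int i = from; i < to; i++) sum += numbers[i];
            return sum;
        }
        int mid = (from + to) / 2;
        SumTask left = new SumTask(numbers, from, mid);
        SumTask right = new SumTask(numbers, mid, to);
        left.fork();                       // hand the left half to another thread
        return right.compute() + left.join();
    }

    public static void main(String[] args) {
        ForkJoinPool pool = new ForkJoinPool(3);
        long result = pool.invoke(new SumTask(new long[]{1, 2, 3, 4, 5, 6}, 0, 6));
        System.out.println(result);        // prints 21
    }
}
```

Every element ends up being worked on in parallel, and idle threads steal the forked subtasks from their busy neighbours.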

Conclusion:

If you want a single task worked on by different threads, the ExecutionService is what you need. If you need different tasks worked on in parallel at the same time, you should use the ForkJoinPool.

If you are using a version of Java lower than 7, you can try this trick with Google Guava. I worked at this time with version 18.0 and that worked fine. Here is a piece of example code:

//Java class
ExecutorService executorService = Executors.newCachedThreadPool();
//Guava class
ExecutionList executionList = new ExecutionList();
//code to execute; MyRunnable is your own Runnable implementation
executionList.add(new MyRunnable(), executorService);
executionList.execute();

Have fun!

zaterdag 11 oktober 2014

reading the method name from the stack


Introduction:

One of my colleagues came up with the idea to set up logging through AOP. I always encourage ideas like that. But this might set you on the wrong foot: this blog is not about how to deal with AOP. This little part is about how to read the method name from the stack so it can be written to a log.

The difficulty was that certain classes should log the method name with a certain pattern and others should not. This made it hard to solve with the patterns available in the logger settings.

The solution:

The idea sounds simple. I worked it out a bit more to comply with the open-closed design principle. Because what I learned in composite designing is that every object should interfere as little as possible with other objects. In other words, it should be as independent as possible. The only way to achieve this is to use reflection.

That opens another discussion: when and where should we use reflection? The first rule is simple: if there is another solution that is as valid, use that one. If there is no other valid option, then use reflection. So why reflection in this matter?

There are two ways to solve this:


  1. Hardcode every method name also as a constant in the class that needs to log the method name.
  2. Use reflection to get the name from the stack.
So this is how we solved it:

String methodName = Thread.currentThread().getStackTrace()[1].getMethodName();
logger.info(methodName);


The currentThread() is a static method on Thread from which you can get the stack trace of that thread. In my example there was only one thread.
The 1 refers to the position of the desired method in the stack (element 0 is the getStackTrace() call itself). The further back you want to go, the higher this number becomes. The only bummer in this story is that you cannot obtain the names that were given to the parameters of the required method. The JVM creates new instances of them and names them like arg0, arg1 etc...
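A tiny sketch of how the index walks back through the stack (the method names are made up for the example):

```java
public class StackDemo {

    static void logCurrentMethod() {
        // Index 0 is getStackTrace itself, index 1 is this method,
        // index 2 is whoever called this method.
        StackTraceElement[] stack = Thread.currentThread().getStackTrace();
        System.out.println("in " + stack[1].getMethodName()
                + ", called from " + stack[2].getMethodName());
    }

    public static void main(String[] args) {
        logCurrentMethod();  // prints: in logCurrentMethod, called from main
    }
}
```

So when you wrap this in a logging helper, remember that the helper itself adds a frame and you have to bump the index by one.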

Conclusion:

Knowing what you are doing is very important. It also shows every time that things in Java are not that hard once you understand the concepts. This is something you are probably not going to use a lot, but when you need it, it is handy.

Have fun!