Data Access With Spring

Spring provides many supporting tools for data management. For example, when working with databases you will usually want to rely on transaction management.

When you annotate a method with @Transactional, a transaction must be present before the method executes. To meet this requirement, Spring proxies your class with a transaction interceptor that uses a transaction manager to administer the transaction.

The transaction manager binds the transaction to thread-local storage. Other resources such as the JdbcTemplate then find it automatically when needed.
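
A minimal sketch of how the pieces fit together (the class, table and bean names are made up for illustration; a DataSource and a JdbcTemplate bean are assumed to exist):

@Configuration
@EnableTransactionManagement
public class TxConfiguration {

    @Bean
    public PlatformTransactionManager transactionManager(DataSource dataSource) {
        return new DataSourceTransactionManager(dataSource);
    }
}

@Service
public class TransferService {

    @Autowired
    private JdbcTemplate jdbcTemplate;

    @Transactional
    public void transfer(long fromId, long toId, BigDecimal amount) {
        //both updates run inside the same transaction started by the proxy
        jdbcTemplate.update("UPDATE account SET balance = balance - ? WHERE id = ?", amount, fromId);
        jdbcTemplate.update("UPDATE account SET balance = balance + ? WHERE id = ?", amount, toId);
    }
}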

See separate post on transaction management with Spring for more details.

Spring throws runtime (unchecked) exceptions to stay independent of the checked exceptions thrown by infrastructure implementations.

For example, SQL error codes for various databases are defined in sql-error-codes.xml inside spring-jdbc.jar and are translated into Spring's DataAccessException hierarchy.

Caching support

Spring's caching support (using simple maps as cache stores) needs to be enabled.

On the Java configuration:
@EnableCaching
or in XML:
<cache:annotation-driven />

Then, specify a cache manager bean. You can write your own or use an existing one, for example SimpleCacheManager, where you define the caches and their names, which you then reference in the value attribute of the @Cacheable annotation.
Third-party cache managers can also be used, e.g. an EhCacheManager retrieved from an EhCacheManagerFactoryBean and wrapped in an EhCacheCacheManager, or GemFire, a “distributed, shared-nothing data grid” that offers cache replication across multiple nodes.

Caching values

@Cacheable(value="...", key="SpEL", condition="SpEL")
can be used for method calls that will always return the same result for the same arguments. A unique key can be defined using SpEL on the arguments.
That key is then used for the caching map. The map's name also has to be defined, as multiple caches can be used. If only certain results shall be cached, the condition defines the rule whether or not to cache.

@CacheEvict(value="...") clears the referenced cache; by default the eviction happens after the method has completed, unless beforeInvocation=true is set.
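
A minimal sketch of how configuration and annotations play together (the cache name "users", the User type and the loadFromDatabase helper are illustrative):

@Configuration
@EnableCaching
public class CacheConfiguration {

    @Bean
    public CacheManager cacheManager() {
        SimpleCacheManager cacheManager = new SimpleCacheManager();
        cacheManager.setCaches(Arrays.asList(new ConcurrentMapCache("users")));
        return cacheManager;
    }
}

@Service
public class UserService {

    @Cacheable(value="users", key="#id", condition="#id > 0")
    public User getUserById(long id) {
        return loadFromDatabase(id); //only executed on a cache miss
    }

    @CacheEvict(value="users", allEntries=true)
    public void resetUserCache() {
        //evicts all entries of the "users" cache once this method has run
    }
}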

JDBC

For several use cases Spring provides templates; one of them is JDBC. Templates prevent code redundancy, are configurable, let you work with the result (you provide a callback object with your own code) and provide exception handling.

The JdbcTemplate is thread-safe: it acquires the connection, participates in the transaction, executes the statement, processes the result set and releases the connection – all things you do not have to worry about. You need one template per database, as the data source is a constructor argument.

Querying

The query string may contain “?” as placeholder(s) for arguments provided with the query-method.

Simple types can be retrieved by using
jdbcTemplate.queryForObject([sql-statement], [arguments, if any], [Type].class);

A single row is queried using queryForMap(...), while for multiple rows queryForList(...) is used, where every entry represents a row in the form of a Map<String, Object>.

The map's keys are the column names and its values are the database values.
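
For example (the table and column names are made up):

//single value
int userCount = jdbcTemplate.queryForObject("SELECT count(*) FROM users", Integer.class);

//single row as column-name/value pairs
Map<String, Object> row = jdbcTemplate.queryForMap("SELECT * FROM users WHERE id = ?", 1);

//multiple rows, each entry being one row as a Map<String, Object>
List<Map<String, Object>> rows = jdbcTemplate.queryForList("SELECT * FROM users");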

Object mapping

  • RowMapper interface for mapping every single row to a domain object. It does not matter whether the query's intention is to return a single row or multiple rows.
    With Java 8, the RowMapper can be replaced by a lambda, as sketched below.
  • RowCallbackHandler if no object shall be returned, meaning you process the result set's rows by implementing the processRow method but do not return anything.
  • ResultSetExtractor receives the whole result set for you to iterate over, mapping multiple rows to a single object. For these two, a shorter Java 8 notation using lambdas is available as well.
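
A sketch of the three variants (the User class, its constructor and the column names are assumptions):

//RowMapper as a Java 8 lambda: one domain object per row
List<User> users = jdbcTemplate.query(
        "SELECT id, first_name, last_name FROM users",
        (resultSet, rowNum) -> new User(resultSet.getInt("id"),
                resultSet.getString("first_name"), resultSet.getString("last_name")));

//RowCallbackHandler: process each row, return nothing
jdbcTemplate.query("SELECT first_name FROM users",
        (RowCallbackHandler) resultSet -> System.out.println(resultSet.getString("first_name")));

//ResultSetExtractor: iterate the whole result set yourself and map it to a single object
Integer total = jdbcTemplate.query("SELECT status FROM users",
        (ResultSetExtractor<Integer>) resultSet -> {
            int sum = 0;
            while (resultSet.next()) {
                sum += resultSet.getInt("status");
            }
            return sum;
        });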

jdbcTemplate.update([sql-statement], args...) processes insert-, update- and delete-statements.

Spring AOP

Aspect-oriented programming is used to address cross-cutting concerns that lead to code scattering (boilerplate code spread across your application because the same concern is repeated everywhere) and/or code tangling (violation of single responsibility by coupling different concerns). Examples of cross-cutting concerns are generic functionalities such as logging or security checks before method execution.

Spring offers integration with AspectJ as well as its own Spring AOP. For the latter, plain Java code is written; AspectJ, on the other hand, is more powerful (for example, with Spring AOP aspects can only be woven around visible methods of Spring beans). AspectJ uses byte code modification to weave in the aspects, while Spring AOP relies on dynamic proxies.

Because of the proxy usage a method call from within the same class/interface will NOT trigger the advice.

AOP uses the concepts of

  • JoinPoint: What is affected: the place in your program where the concern is applied, e.g. a method call
  • Pointcut: Where it is applied: an expression matching application code for JoinPoints
  • Advice: What the aspect's concern is: code that is executed at a JoinPoint
  • AdviceType: When the advice is applied: @Before, @AfterThrowing, @Around, @AfterReturning, @After etc.
  • Aspect: Component to encapsulate pointcuts and advice

Implementing an aspect

To use AOP you need to have an aspect and use it as a bean. Here’s an example with a separate configuration:

Example aspect:

@Aspect
@Component
public class GenericLogger {
 
    private static final Logger LOGGER = LoggerFactory.getLogger(GenericLogger.class);
    
    @Before("execution(* add*(..))")
    public void debugAddCall(JoinPoint joinPoint) {
        String joinPointName = joinPoint.getSignature().getName();
        Object joinPointArg0 = joinPoint.getArgs()[0];
        String targetType = joinPoint.getTarget().getClass().getSimpleName();    //not very practical here, just to show how to access the target
        LOGGER.debug("Method {} about to be called on type {} with argument {}", joinPointName, targetType, joinPointArg0);
    }
}

Aspect configuration:

@ComponentScan("ch.pma.useradmin.logging") //package where to find the aspect-bean(s)
@EnableAspectJAutoProxy
public class LoggingConfiguration {
 
}

You then only need to import the configuration into your main configuration class:

@Import(LoggingConfiguration.class)

The same in XML:

<aop:aspectj-autoproxy />
<context:component-scan base-package="..."/>

Exceptions

If you use @Before and the advice itself throws an exception, the target will not be called.

@After will be called regardless of whether the target threw an exception or not.

@AfterReturning, @AfterThrowing and @Around

If you use @AfterReturning, you can access the returned value by defining it in the annotation and as an argument:

@AfterReturning(value="execution(...)", returning="returnedObject")
public void debugReturnedObject(ReturnedObjectType returnedObject) {
    ...
}

With @AfterThrowing, the case for a thrown exception is similar:

@AfterThrowing(value="execution(...)", throwing="exception")
public void logException(ExceptionType exception) {
    ...
}

With the @Around advice, you pass a

ProceedingJoinPoint

as an argument to your advice on which you call

.proceed()

to call the method (or decide to skip it by not calling proceed()).
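
A minimal sketch of an @Around advice measuring execution time (the pointcut and package name are illustrative; the method could live in the GenericLogger aspect shown above, reusing its LOGGER):

@Around("execution(* ch.pma.useradmin.service.*.*(..))")
public Object measureExecutionTime(ProceedingJoinPoint proceedingJoinPoint) throws Throwable {
    long start = System.currentTimeMillis();
    try {
        //invoke the target method; not calling proceed() would skip the target entirely
        return proceedingJoinPoint.proceed();
    } finally {
        LOGGER.debug("{} took {} ms", proceedingJoinPoint.getSignature().getName(),
                System.currentTimeMillis() - start);
    }
}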

Pointcut expressions

An expression consists of a designator (normally “execution”) and a combination of annotation, return type, package, type, method and params to match.

You can use wildcards such as “*” or “..”. Operators such as “||”, “&&” and “!” are supported as well.

Pointcut expressions might also match annotations:

@After("execution(@ch.pma.useradmin.annotation.ServiceMethod * *(..))")
public void debugServiceMethodCall() {
    LOGGER.debug("Service-method was called");
}

matches any method annotated with

@ServiceMethod

Alternatively, there is a designator for annotations:

@Before("execution(...) &amp;&amp; @annotation(serviceMethod)")
public void doSomething(ServiceMethod serviceMethod) {
    ...
}

The expression can also provide typesafe access to target, arguments and/or the proxy-object.

All the types do have to match or the advice will be skipped:

@Before("execution(void *.RepositoryService.*(java.util.Map)) &amp;&amp; target(instance) &amp;&amp; args(map) &amp;&amp; this(proxy)")
public void doSomething(RepositoryService instance, Map map, RepositoryService proxy)(
    ...
}

Remember that for the proxy, an interface will be implemented or a class extended, therefore it is possible to narrow the execution via definition of the type to be proxied.

@Pointcut

With the

@Pointcut

annotation you can break complex expressions into separate ones and reference them in the advice's expression:

@Pointcut("execution(* package1.*.*(..))")
public void pointcut1() {}
 
@Pointcut("execution(* package2.*.*(..))")
public void pointcut2() {}
 
@Before("pointcut1() || pointcut2()")    //you could even fully-qualify your pointcut(s) here
public void doSomething(JoinPoint joinPoint) {
        LOGGER.debug("Method to be called on {}", joinPoint.getSignature().getName());
}

Integration Tests With Spring

For unit tests of a project created with Spring, you should not depend on Spring itself: unit tests must not rely on external dependencies, and Spring is one. Such dependencies should be stubbed or mocked instead.

However, Spring provides support for integration testing with configurations, profiles and databases. These support-classes are located in spring-test.jar and are used in combination with JUnit.

Setup

@RunWith(SpringJUnit4ClassRunner.class)

annotation on your test class creates an application context which is shared across all test methods.

If a test method modifies beans in the application context it can be annotated with

@DirtiesContext

to clean the context after the test execution.

Configurations

To reference a configuration for the tests, the test class can be annotated with

@ContextConfiguration(classes=TestConfig.class)

With that in place, you can inject the bean to be tested into your test class using

@Autowired

Example

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(classes=TestConfig.class)
public class MyServiceTest {
 
    @Autowired
    private MyService myService; //bean to be tested
 
    @Test
    public void testMyService() {
        myService.doSomething();   
    }
}

@ContextConfiguration could also be supplied with a string array to reference XML configurations. Without any argument, it defaults to

[classname]-context.xml

in the same package.

As a third option, you can add a test-specific configuration directly in the test class by defining a static inner class in the test class and annotating it with

@Configuration

Profiles

Testing with profiles can be done by annotating the test class with

@ActiveProfiles({"profile_1","profile_2"})

Beans associated with one of the defined profiles, and those not associated with any profile,
will be loaded automatically.

Testing with databases

When you run tests against an in-memory-database you can use

@Sql

to ensure its proper state or to insert some test records.

@Sql can be supplied with string arguments referencing sql-scripts.

If the annotation decorates a class, the script(s) run before every test method but it can also decorate a single test method.

Also, multiple @Sql annotations can be used, and it is possible to set an

executionPhase

e.g. to run a specific script as cleanup after a certain method.

If no string argument is defined with the annotation, the script will be referenced as

[classname].[methodname].sql

Defining

config=@SqlConfig(...)

as an argument of @Sql gives you a whole lot of other options to control the scripts, e.g. whether the test should fail if the execution of the script fails.

In-memory databases

Create a DataSource bean using an

EmbeddedDatabaseBuilder

where you define a name, a type (HSQL, H2 or Derby) and can add several scripts with

.addScript("classpath:testdb-schema.sql")

to set it up. Finally, call

.build();

The XML-equivalent is

<jdbc:embedded-database id="..." type="...">
    <jdbc:script location="..." />
</jdbc:embedded-database>

Example

Testclass:

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(classes={UseradminConfiguration.class, TestRepositoryConfiguration.class})
public class UserServiceTest {
 
    @Autowired
    private UserService userService;
    
    @Test
    @Sql(scripts="classpath:testdata.sql", executionPhase=ExecutionPhase.BEFORE_TEST_METHOD)
    public void testUserRetrieval() {
        User testUser = userService.getUserById(1);
        assertTrue("test".equals(testUser.getFirstName()) && "user".equals(testUser.getLastName()));
    }
}

Testconfiguration:

(Depending on the implementation of service and repository, the only bean needed could be the data-source).

@Configuration
public class TestRepositoryConfiguration {
 
    @Bean(name = "dataSource")
    public DataSource dataSource() {
 
        EmbeddedDatabaseBuilder dataSource = new EmbeddedDatabaseBuilder();
        dataSource.setType(EmbeddedDatabaseType.H2);
        dataSource.addScript("classpath:schema.sql");
        return dataSource.build();
    }
    
    @Bean
    public LocalContainerEntityManagerFactoryBean entityManagerFactory() throws ClassNotFoundException, PropertyVetoException {
        LocalContainerEntityManagerFactoryBean emf = new LocalContainerEntityManagerFactoryBean();
        emf.setDataSource(dataSource());
        emf.setJpaProperties(new Properties());
        emf.setJpaVendorAdapter(jpaAdapter());
        emf.setPackagesToScan("ch.pma.useradmin.entities");
        return emf;
    }
 
    private JpaVendorAdapter jpaAdapter() {
        return new HibernateJpaVendorAdapter();
    }
    
    @Bean
    public JpaTransactionManager transactionManager() {
        return new JpaTransactionManager(); //picks up the EntityManagerFactory defined in this context
    }
}

schema.sql

DROP TABLE IF EXISTS USER;
CREATE TABLE USER (id INTEGER IDENTITY PRIMARY KEY, first_name VARCHAR(50), last_name VARCHAR(50) NOT NULL, STATUS INT(11), ROLE VARCHAR(50));

testdata.sql

INSERT INTO USER(first_name, last_name, STATUS, ROLE) VALUES('test', 'user', 1, 'admin');

Using an existing test database

You need a

DataSourceInitializer

where you set the dataSource pointing to the database and a

DatabasePopulator

(e.g. ResourceDatabasePopulator) which you provide with the paths to the initialization scripts.
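
A hedged Java-config sketch of that setup (the script names are assumptions):

@Bean
public DataSourceInitializer dataSourceInitializer(DataSource dataSource) {
    ResourceDatabasePopulator populator = new ResourceDatabasePopulator();
    populator.addScript(new ClassPathResource("testdb-schema.sql"));
    populator.addScript(new ClassPathResource("testdb-data.sql"));

    DataSourceInitializer initializer = new DataSourceInitializer();
    initializer.setDataSource(dataSource);
    initializer.setDatabasePopulator(populator);
    return initializer;
}
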
The same in XML is more compact:

<jdbc:initialize-database data-source="...">
    <jdbc:script location="..." />
</jdbc:initialize-database>

Spring Basics

Disclaimer

You are not reading a complete reference; these posts are more about building a basic understanding and getting something to work with.

Most examples use Java config rather than XML or annotation-based configuration.

What is Spring

Spring is

  • a Framework
  • a Container
  • open source

The framework consists of many subprojects to simplify working with lower-level technologies. Get an overview here: http://spring.io/projects

The projects to choose from may be overwhelming in the beginning, but simply remember this: whenever you have a (Java) programming problem that you think somebody might have solved already, search for it in the Spring world and you may well save quite some time.

Source code is available here: https://github.com/spring-projects/spring-framework

Binaries are available here: http://mvnrepository.com/artifact/org.springframework

As a container, it serves as a lifecycle manager by

  • instantiating your application objects (beans)
  • injecting their dependencies, so you do not have to care about them finding and connecting to each other

Configuration

Spring is configurable and focuses on programming against interfaces. Therefore, the complexity of the implementation can be concealed and implementations can easily be swapped out, e.g. for testing. Your objects managed by Spring are so-called Spring beans.

The central part of your spring-application, the configuration, can be set up in different ways:

  • Using XML, the traditional way
  • Annotation based (required for Spring MVC)
  • by Java config, meaning the configuration is also a bean derived from a Java-class

Comparison between Java config and annotations

Java config pros:
  • Is centralized in one (or a few) places
  • Strong type checking enforced by compiler and IDE
  • Can be used for all classes
Java config cons:
  • More verbose than annotations
Annotation pros:
  • Frequent changes to beans are made easy
  • the class as single place to edit
  • rapid development
Annotation cons:
  • Configuration spread across your classes (more maintenance/debugging)
  • Only applicable for your own code
  • Bad separation of concerns as configuration and code are merged

The configuration contains instructions on how Spring sets up its application-context which creates your fully configured application system.

Java config example

@Configuration
@ComponentScan("ch.pma.myapp")
public class MyConfiguration {
 
    @Bean(name="myService")
    public MyService getMyService() {        
        return new MyServiceImpl();
    }
 
    @Bean(name="myRepository")
    public MyRepository getMyRepository() {       
        return new MyRepositoryImpl();
    }
}

@ComponentScan allows you to use annotation-based configuration on classes, effectively combining the two methods of Java config and annotation-based configuration. For example, we can define a controller-bean using the @Controller-annotation, turning it into a bean available in the application context:

@Controller
public class MyController {
 
    private MyService myService;
 
    @Autowired
    public void setMyService(MyService myService) {       
        this.myService = myService;
    }
 
    public User getUserById(int id) {
        return myService.getUserById(id);
    }
}

@Controller, normally used in a web-environment for example in combination with @RequestMapping(s), is a sub-annotation of

@Component

You could also use one of the other stereotype sub-annotations

@Repository (e.g. for exception-translation)
or 
@Service

for the other beans, the effect in this example is the same. All beans, whether they are defined in XML, based on annotated classes and picked up by the component scan, or created in the configuration, will be fully initialized and available in the application context.

You can set a bean name with the annotation; if not set, the name will be the class name in camelCase.

When you enable the component-scan, be sure to define base-package(s) as your whole classpath will be scanned otherwise.

Bean uniqueness

When you annotate two classes of the same type as beans, you need to provide an ID, which you can refer to via @Qualifier to wire the correct bean:

@Component("prodRepository")
public class JdbcRepository implements MyRepository {}
@Component("devRepository")
public class JpaRepository implements MyRepository {}
@Service
public class MyServiceImpl implements MyService {
    @Autowired   
    @Qualifier("devRepository")    
    MyRepository myRepository;
}

If no unique bean can be found by type and no @Qualifier is to be used, Spring tries to find a matching bean by name = bean-id.

These examples will look for a bean with id “sampleBean”:

@Autowired
private MyBean sampleBean;
@Autowired
public void setSampleBean(MyBean myBean) {
    ...
}
@Autowired
public void someValue(MyBean sampleBean) {
    ...
}

Multiple configuration files

You can split your configuration into several classes and combine them in your main-configuration file by using

@Import({Configfile1.class, Configfile2.class})

You could even combine XML-configuration with your configuration-class, using @ImportResource:

@Configuration
@ImportResource({"classpath:ch/pma/application-config.xml", "file:/home/pma/application-config.xml"})@Import(Configfile1.class)public class MyConfig { ... }

A bean reference from another configuration-file can be obtained by using

@Autowired

allowing you to separate your “application” beans from your “infrastructure” beans.

When using @Autowired, a unique dependency of the correct type must exist unless “(required=false)” is added to the annotation. With Java 8 and lambdas, optional autowiring can be defined and used as follows:

@Autowired
Optional<YourService> yourService;
 
public void useService() {
    yourService.ifPresent( s -> {
        //s is the instance of YourService
    });
}

Example

@Configuration
@Import(InfrastructureConfiguration.class)
public class ApplicationConfiguration {
 
    @Autowired
    DataSource dataSource;
 
    @Bean
    public MyRepository myRepository() {
        MyRepositoryImpl myRepository = new MyRepositoryImpl();
        myRepository.setDataSource(dataSource);
        return myRepository;
    }
}
@Configuration
public class InfrastructureConfiguration {
 
    @Bean
    public DataSource getDataSource() {
        ...
    }
}

In general you can use

@Autowired

on a constructor, method or (even on a private) field. Some say, using it on a field is bad practice though (see http://olivergierke.de/2013/11/why-field-injection-is-evil/).

It is not illegal to define the same bean more than once (e.g. in different configurations); you will get the last bean definition Spring sees.

ApplicationContext

Spring application contexts can be bootstrapped in any environment; you can even use more than one context, e.g. one for the repository domain of your application and separate ones for different web endpoints.

Use your configuration when setting up the application context and retrieve beans from it to work with.

Example

public class MyApp {
 
    private ApplicationContext applicationContext;
 
    public static void main(String[] args) {
 
        new MyApp();
    }
 
    public MyApp() {
 
        initApplicationContext();
        MyController myController = applicationContext.getBean(MyController.class);
        myController.getUserById(1);
        closeApplicationContext();
    }
 
    private void closeApplicationContext() {
        ((ConfigurableApplicationContext) applicationContext).close();
    }
 
    private void initApplicationContext() {
        this.applicationContext = new AnnotationConfigApplicationContext(MyConfiguration.class);
    }
}

Bean scope

Possible scopes are

  • singleton
  • prototype (instantiates a new bean every time the bean is referenced)
  • request (foreseen for usage in web-environments)
  • session (foreseen for usage in web-environments)
  • custom (define your own name and rules for this scope)

Singleton is the default scope, meaning that multiple calls to

applicationContext.getBean("...")

always return the same instance. To use another scope, the bean has to be annotated with

@Scope("[desired_scope]")

Bean lifecycle in an application context

Creation:

In general:

  • Bean definitions loaded
  • Post-process bean definitions (BeanFactoryPostProcessor; e.g. replacing property placeholders with their actual values via a PropertySourcesPlaceholderConfigurer. These beans must be defined via static @Bean methods to ensure their early creation.)

For each bean:

  • Instantiate bean (eagerly by default, using lazy is not recommended)
  • Populate properties (dependency injection)
  • setters for bean-factory and application-context
  • BeanPostProcessor postProcessBeforeInitialization
  • afterPropertiesSet()
  • custom init-method
  • BeanPostProcessor postProcessAfterInitialization (@PostConstruct methods have already been handled by a BeanPostProcessor in the before-initialization step)

ApplicationContexts autodetect BeanPostProcessor beans in their bean definitions and apply them to any beans subsequently created.

BeanPostProcessors are used e.g. for proxying the bean, for example to handle transaction management.

Plain bean factories allow for programmatic registration of post-processors, applying to all beans created through this factory. For standard usage you will not need a BeanFactory though, as the application context extends the BeanFactory.
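
A minimal sketch of a custom BeanPostProcessor that simply logs every initialized bean (purely illustrative):

@Component
public class LoggingBeanPostProcessor implements BeanPostProcessor {

    @Override
    public Object postProcessBeforeInitialization(Object bean, String beanName) {
        return bean; //nothing to do before initialization
    }

    @Override
    public Object postProcessAfterInitialization(Object bean, String beanName) {
        System.out.println("Initialized bean " + beanName);
        return bean; //a proxy wrapping the bean could be returned here instead
    }
}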

Destruction:

Is completed when an application context is closed

If available:

  • @PreDestroy or @Bean(destroyMethod=”…”) / destroy-method=”…” (XML)
  • destroy() (if DisposableBean)

Actual destruction of the objects follows when garbage collection next runs.

External Properties

Getting properties from the Environment

The environment automatically provides access to System properties and Environment variables.

Use Environment explicitly:

@Autowired
public Environment env;
...
env.getProperty("db.url");

or implicitly by using “${…}” placeholders:

@Value("${db.url}") String dbUrl

For undefined values, an alternative value can be provided using a colon:

@Value("${db.connectionLimit:10}")private int connectionLimit;

Property Sources

@PropertySource("[classpath:|file:|http:]/my_props.properties")

is used for referencing other property sources but requires a PropertySourcesPlaceholderConfigurer bean:

@Bean
public static PropertySourcesPlaceholderConfigurer propertySourcesPlaceholderConfigurer() {
    return new PropertySourcesPlaceholderConfigurer();
}

Your property-source path can also contain ${…} placeholders, which are resolved against existing system properties and environment variables.
This way, you could provide different property-files for different environments like dev, test or prod.
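
A hedged sketch of that idea, selecting the file via a system property or environment variable named ENV (the property name and file layout are made up):

@Configuration
@PropertySource("classpath:config/app-${ENV:dev}.properties")
public class PropertyConfiguration {
    //plus the static PropertySourcesPlaceholderConfigurer bean shown above
}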

SpEL

Spring expression language can be used to resolve more complex expressions than just property-names.

Examples:

@Value("#{systemProperties['db.url']}") 
String dbUrl;
 
@Bean 
public DataSource dataSource(@Value("#{systemProperties['db.url']}") String url, …) {
    DriverManagerDataSource dataSource = new DriverManagerDataSource(); //any concrete DataSource implementation
    dataSource.setUrl(url);
    return dataSource;
}

systemProperties and systemEnvironment are available by default.

It is also possible to access public fields and/or getters of beans:

@Value("#{dataSource.url}") 
String dataSourceUrl;

SpEL also supports the Elvis-operator, to shorten the usage of a ternary operator with a null-check:

ExpressionParser parser = new SpelExpressionParser();
StandardEvaluationContext context = new StandardEvaluationContext();
context.setVariable("testName", null);
String actualName = parser.parseExpression("#testName ?: 'Unknown'").getValue(context, String.class); //yields "Unknown"

In general, SpEL is way more powerful than what’s shown in these few examples. Read this: http://docs.spring.io/spring/docs/current/spring-framework-reference/html/expressions.html for more insight on the topic.

Profiles

Beans can be grouped into profiles by using

 @Profile("[desired_profile_id]")

on the configuration or on the bean.

Beans not grouped by a profile are always active.

Profiles need to be activated to be usable. Ways to do so are:

  • For testing: @ActiveProfiles
  • Web-Application: Setting the context-param name spring.profiles.active and its value to the desired profile(s) in the web.xml
  • System property: -Dspring.profiles.active=profile1,profile2
  • Setting the spring.profiles.active system property programmatically before accessing the application context
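
A minimal sketch (the bean, class and profile names are illustrative):

@Configuration
public class DataSourceConfiguration {

    @Bean
    @Profile("dev")
    public DataSource devDataSource() {
        return new EmbeddedDatabaseBuilder().setType(EmbeddedDatabaseType.H2).build();
    }

    @Bean
    @Profile("prod")
    public DataSource prodDataSource() {
        return new DriverManagerDataSource("jdbc:h2:tcp://prod-host/mydb", "user", "password"); //illustrative values
    }
}

Activating the dev profile, e.g. via -Dspring.profiles.active=dev, then makes only devDataSource available in the application context.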

Proxying

When using Java config, Spring proxies @Configuration classes using CGLIB. It creates a subclass of the configuration class

@Configuration
public class MyConfig {
...
}

named

public class MyConfig$$EnhancerByCGLIB$ extends MyConfig {
...
}

When a singleton bean is referenced, this subclass first checks whether the bean already exists in the application context and returns it if so. Otherwise it calls the super-class method to create the bean, stores it in the application context and returns it.

A BeanPostProcessor may wrap any bean with a proxy, adding behaviour transparently. For example, your calls to your @Transactional RepositoryService are routed through a TransactionInterceptor which begins and commits (or rolls back) the transaction.

Spring uses JDK and CGLib to set up proxies. By default, all interfaces are proxied using dynamic (JDK) proxies. CGLib, which is included in the Spring .jars, is used when no interface is available as long as the class or method is not declared final.

Therefore, a JDK proxy will implement the interface, while the CGLib proxy will extend the class to be proxied.

Comparison between Java config and XML

Profile

Java config:
@Profile("dev")
public class DevClass {}

XML:
<beans profile="dev">
</beans>

You can nest <beans>-tags within the parent beans-tag to separate beans into several profiles.

Bean ID

Java config:
@Bean
public MyService myService() {}
//id = method name

or

@Bean(name="myService") //id = name
public MyService someService(){}

XML:
<bean class="ch.pma.myapp.services.MyService" id="myService"/>
Dependency injection

Java config:
MyService myService = new MyServiceImpl();
myService.setDependency(myDependency());

XML:
<bean id="myService">
  <property name="dependency" ref="myDependency"/>
  <!-- implicit reference to setter setDependency(...) -->
</bean>

For some types, properties can be set using

<property name="someProperty" value="..." />
or
<property name="someProperty">
  <value>...</value>
</property>

instead of ref="...". These types are:

  • Numeric types
  • BigDecimal
  • boolean: “true”, “false”
  • Date
  • Locale
  • Resource
Constructor injection (unlike setter-injection, constructor-arguments can not be treated as optional)

Java config:
MyService myService = new MyServiceImpl(myDependency());

XML:
<bean id="myService">
  <constructor-arg ref="myDependency"/>
</bean>

constructor-arg elements can be in any order; to indicate your intended order when passing ambiguous values, use index (starting from 0)

<constructor-arg index="0" value="123"/>
<constructor-arg index="1" value="some string"/>

or set the name property, e.g.

name="intValue"

matching the name of the constructor argument, to omit the index.

Import

Java config:
@Configuration
@Import(Config1.class)
public class MyConfig { ... }

XML:
<beans>
  <import resource="config1.xml" />
</beans>

Relative path is used by default. Alternatives are file, classpath, http.

Bean behaviour

Annotations require annotation-driven configuration or the component-scanner to be activated.

@PostConstruct
<bean ... init-method="init">
  ...
</bean>

@PreDestroy
<bean ... destroy-method="destroy">
  ...
</bean>

@Scope(...)
<bean ... scope="...">
  ...
</bean>

@Lazy(true)
<bean ... lazy-init="true">
  ...
</bean>
Component scan

Java config:
@ComponentScan({"ch.pma.myapp.package1",
"ch.pma.myapp.package2"})

XML:
<context:component-scan 
base-package="ch.pma.myapp.package1, ch.pma.myapp.package2"/>

XML Namespaces

The default namespace in a Spring configuration file is typically the “beans” namespace:

<beans xmlns="http://www.springframework.org/schema/beans" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd">

Various framework functionality (aop, tx, context, etc.) can be accessed by adding the corresponding predefined namespace. This makes advanced features of Spring easily declarable.

For example,

<beans ...
  xmlns:tx="http://www.springframework.org/schema/tx"
  ...
  xsi:schemaLocation="http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx-2.0.xsd 
  ...">
  ...
  <tx:annotation-driven />
  ...
</beans>

hides a couple of bean-definitions for enabling the configuration of transactional behavior based on annotations.

Redstone for Beginners – Sorting Machine

André and I are working on a fancy new housing on the survival instance on his server and it’s becoming obvious that we need a sorting-system for all the good stuff we are getting out of the ground.

So I watched this, tried to build it – and failed.

There were comments saying that it would not work with V 1.8.x but I did not believe that.
The major problem was that I did not understand how the components actually work, therefore I could not debug the system.

Having looked at the Minecraft Wiki, I realized that it is quite easy to understand. I rebuilt it and now it works – with a small restriction (it is a little annoying that it was not pointed out in the video) described at the bottom of this post.

Here is what you want to achieve: Put random items in a box and let them be delivered to another box of your choice. The transport from the original box to the destination is done via hoppers.

Things to know about the hopper:

  • In general, they will suck items out of containers (like boxes or other hoppers) placed above them for as long as they can
  • In general, if their outlet is connected to a device that can store items, they will pass on everything that is put into them until the target is full. If the outlet is not connected to anything, they will not pass items on.
  • They possess 5 item slots of their own. If all slots contain items, the hopper will accept further items of the same kind only. The slots are emptied from left to right.
  • They send a redstone signal if they have items in their slots. The signal strength is based on the percentage of the fill level – it goes up for every 1/14 of filling. To find out what signal strength your hopper is sending, you need to know the max storage capacity, which again is based on the items you put into the hopper (see the sketch after this list).
    E.g. if you put blocks of iron and dirt into the hopper, the max amount of blocks would be 5 x 64. The floor of 1/14 of that is 22, so the signal strength would be 0 for 0 blocks total, 1 for 1 – 22 blocks total, 2 for 23 – 45 blocks total and so on. However, if you use eggs, the total capacity is only 5 x 16 as eggs can only be stacked to 16. 1/14 of the capacity would then be only 5 eggs.
  • If they receive a redstone signal, e.g. from a redstone torch underneath, they will block their in- and outlet
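
As a small model of the arithmetic described above (just a sketch of the rule from the list, not game code):

//5 slots times the item's max stack size, e.g. 320 for blocks, 80 for eggs
static int signalStrength(int itemCount, int maxStackSize) {
    int capacity = 5 * maxStackSize;
    if (itemCount == 0) {
        return 0;
    }
    return 1 + (14 * itemCount) / capacity; //goes up by 1 for every full 1/14 of filling
}
//signalStrength(22, 64) == 1, signalStrength(23, 64) == 2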

You probably see where this is leading. We want to have one hopper per material we want to sort (the material being placed in its leftmost slot). To achieve that, we need to pre-fill the sorting hoppers. To pre-fill them, we also need to define one item that must never be put into the machine for sorting – which is dirt in my case.

Hopper setup front view:

Hopper setup side view:

We need to pre-fill the sorting hoppers with the correct amount of blocks, so that one added block will change the generated signal strength from 1 to 2. Huh, why is that? Because the bottom-level hopper, which sucks items out of the sorting hopper above and puts them into the box, is deactivated by a redstone torch, so that it does not suck out our pre-filling materials.
If the signal strength of the sorting hopper changes, we know that we have to activate the bottom-level hopper. This is done by sending a redstone signal to the redstone torch – for as long as the sorting hopper sends signal strength > 1.

Processing the hopper’s signal

For that we’re going to need the comparator; it has three inputs, one output and a switch.

Basically it does this to calculate the output (x = rear input, y and z = side inputs, A = true when the switch is set to subtract mode):

w = y > z ? y : z;                   // the stronger of the two side inputs
out = x >= w ? (A ? x - w : x) : 0;  // compare mode passes x through, subtract mode outputs x - w

For the sorting machine we can use the sorting hopper as the only input x, so that the comparator will just pass on the signal received – with unchanged strength.

Next, we want a sticky piston to activate on signal strength 2. It has a redstone block attached which sends a signal to the appropriate redstone torch, deactivating it while the piston is extended.
We achieve this by putting exactly 2 blocks between the piston and the comparator and connecting them via redstone, because the signal strength diminishes by 1 per block of redstone wire.

Here it is important to not let a wire interfere with others. That’s what the nifty placing of blocks in the area between comparators and pistons is for. On the floor, mostly comparators are used because they will not compromise each other sideways.

Pistons and comparators on the floor:

Non-interfering redstone-wires:

Most probable sources of error if you are facing problems

  • Hoppers and/or comparators are not facing the correct way
  • Redstone wires interfere with each other on their way to the pistons
  • Incorrect amount of pre-filling in the sorting hoppers

Also, for some components the correct order of placement can be kind of important, e.g. if you prefill your sorting hopper and then put the bottom hopper underneath it without having the redstone torch in place already, it will suck your sorting hopper empty.

About the boxes

To put two small boxes next to each other, every second box has to be a trapped box instead of a normal one.

Final thoughts

This should enable you to not only build the sorting machine as shown in the video but also understand it and its components – giving you a good base for future Minecraft automation projects.

If you try out the machine, you will notice the small drawback I mentioned in the beginning: the bottom hopper will always keep the last item processed, because the redstone torch underneath is reactivated before the hopper has been emptied completely. I have thought about the problem and have not yet found an easy way to fix it.

How To Configure IPSec VPN on pfSense For Use With iPhone, iPad, Android, Windows and Linux

Info: After having performed the pfSense upgrade from version 2.1.5 to 2.2, I am no longer able to connect with iPhones to the VPN endpoint. I cannot say what exactly the issue is right now. But as the pfSense people have switched from racoon to strongSwan, there seem to be some significant changes under the hood. I am sorry to say, but this guide is no longer applicable to the current version of pfSense. As soon as I find time to investigate this issue, I will post updates here.

Just some side notes: The VPN client in iOS 8 now supports IKEv2, but this feature has not yet been made available in the UI of the VPN client. There is a tool called “Apple Configurator” which can be used to set up a VPN profile that supports IKEv2. pfSense also supports IKEv2 now (since the switch to strongSwan).

If anyone gets this thing working again, I am highly interested. Thank you for letting me know.

1. Introduction

I own a pfSense Box myself which runs on an APU1C4 board from PC Engines. I use it for firewalling and as VPN endpoint for various client devices such as iPhones, iPads, Android phones and tablets, Windows PCs and Linux boxes. In this article I want to share my experience in turning your pfSense box in a device which acts as an IPsec VPN endpoint.

2. Goals

My main goals were:

  • Mobile devices should be able to connect to my pfSense box and make use of IPsec full-tunneling, which means ALL traffic runs through my pfSense box. This is especially useful if you’re located outside your country and want to access content, which is accessible from domestic IP addresses only.
  • I also want to access my private LAN in order to manage my systems, access my file shares and reach other resources.

So far, no special goals. Let’s move on.

3. System Environment

3.1 My pfSense Box

My pfSense is running on version 2.1.5-RELEASE (amd64), built on Aug 25 07:44:45 EDT 2014, with FreeBSD 8.3-RELEASE-p16 under the hood. The box is driven by an APU1C4 Mini-ITX mainboard bought from PC Engines GmbH in Switzerland. The board has some nice hardware specs such as 4 gigs of RAM, an AMD G-T40E dual-core processor and gigabit ethernet network interfaces. The ideal playground to provide VPN connectivity on an embedded device. The only (possible) drawback is that the OS is running from an SD card in my case. But you don't have to do it that way: there are also some mSATA SSD modules available which allow you to run your OS from an SSD.

3.2 Clients

I have tested client connectivity using the following devices:

Device Model No. OS Version VPN Client
Google Nexus 7 Tablet K009 D80KBC139568 Android 4.4.3 Default
Apple iPhone 5s A1533 iOS 7.1.2 Default
Apple iPhone 5s A1457 iOS 7.1.2 Default
Apple iPhone 4 A1332 iOS 7.1.2 Default
Apple iPad Mini A1432 iOS 7.1.2 Default
Apple iPad 3 A1430 iOS 7.1.2 Default
Apple iPad 2 A1396 iOS 7.1.2 Default
Apple MacBook Pro A1398 MacOS X 10.9.4 Default
Lenovo X201 4290-N77 Windows 8 Shrew Soft VPN Client
Lenovo X200 7458-E46 Linux Mint 16 vpnc

Update: I have tested the configuration on an iPad running on iOS 8.1.2 as well. Detailed test results follow soon. Please bear with me.

Please note that I have used the vendor-supplied default VPN clients for all Apple and Android devices. There was nothing to install at all. For Windows, I have used the Shrew Soft VPN client 2.2.2-release build dated Jul 01 2013. For Linux systems, I have used the vpnc package, a command-line VPN client, at version 0.5.3r512.

4. pfSense Configuration

Log in to your pfSense box and select VPN -> IPsec. Go to the Tunnels tab and make sure Enable IPsec is checked. Then, add a phase 1 entry and make sure, the following values are set:

Section Setting Value
General Information Disabled Unchecked
Internet Protocol IPv4
Interface WAN
Description (empty)
Phase 1 proposal (authentication) Authentication method Mutual PSK + Xauth
Negotiation mode aggressive
My identifier My IP address
Peer identifier Type: Distinguished name
Value: <identifier>
Pre-Shared Key <pre-shared secret>
Policy Generation Unique
Proposal Checking Default
Encryption algorithm AES 256 bits
Hash algorithm SHA1
DH key group 2 (1024 bit)
Lifetime 86400 seconds
Advanced Options NAT Traversal Enable
Dead Peer Detection Unchecked

In my case, I have chosen vpnusers as the value for <identifier>, but you can choose whatever you like. Just pick a simple-to-remember name here; once everything works, do not forget to change it to something stronger. Save your settings and go back to the VPN -> IPsec menu. Now, add a phase 2 entry to the already existing phase 1 entry having the following values set:

Section Setting Value
General Information Disabled Unchecked
Mode Tunnel IPv4
Local Network Type: LAN subnet
Description (empty)
Phase 2 proposal (SA/Key Exchange) Protocol ESP
Encryption algorithms AES 256 bits
Hash algorithms SHA1
PFS key group off
Lifetime 28800 seconds
Advanced Options Automatically ping host (empty)

Again, save your changes and go back to VPN -> IPsec menu. Now select the Mobile clients tab and make sure the following values are set as follows:

Section Setting Value
IKE Extensions Enable IPsec Mobile Client Support
Extended Authentication (Xauth) User Authentication Source: Local Database
Group Authentication Source: system
Client Configuration (mode-cfg) Virtual Address Pool Provide a virtual IP address to clients: Checked
Network: 192.168.111.0/24
Network List Provide a list of accessible networks to clients: Unchecked
Save Xauth Password Allow clients to save Xauth passwords: Checked
DNS Default Domain Provide a default domain name to clients: Checked
Value: localdomain
Split DNS Provide a list of split DNS domain names to clients: Unchecked
Value: (empty)
DNS Servers Provide a DNS server list to clients: Checked
Server #1: 8.8.8.8
Server #2: (empty)
Server #3: (empty)
Server #4: (empty)
WINS Servers Provide a WINS server list to clients: Unchecked
Server #1: (empty)
Server #2: (empty)
Phase 2 PFS Group Provide the Phase 2 PFS group to clients: Unchecked
Group: off
Login Banner Provide a login banner to clients: Checked
Value: (Whatever text you like)

Save your changes. Now go to System -> User Manager and select the Group tab. Add a new group called vpnusers. Make sure, the group has the privilege User – VPN – IPsec xauth Dialin set. Save it. Now go to the Users tab and create a user which will later be used to connect to your VPN box. Make sure the user has the group vpnusers set.

Now we need to open the firewall to allow VPN connections to pass through. Go to Firewall -> Rules and select the WAN tab. Configure the following rules:

Proto Source Port Destination Port Gateway Queue Schedule Description
IPv4 UDP * * * 500 (ISAKMP) * None (empty) IPsec
IPv4 UDP * * * 4500 (IPsec NAT-T) * None (empty) IPsec

Select the IPsec tab and add a rule which allows all traffic to go through the VPN connection:

Proto Source Port Destination Port Gateway Queue Schedule Description
IPv4 * * * * * * None (empty) Allow all

5. Configuring Client Devices

5.1 Configuring Your iPhone

In order to get your iPhone, iPad or MacBook running, just enter the following parameters:

Parameter Value
VPN Type IPsec
Description <Description>
Server <IP/hostname of your VPN endpoint>
Account <user>
Password <password>
Group <identifier>
Shared Secret <pre-shared secret>
Proxy Off

5.2 Configuring Your Android Device

Parameter Value
Name <Description>
Type IPSec Xauth PSK
Server address <IP/hostname of your VPN endpoint>
IPSec identifier <identifier>
IPSec pre-shared key <pre-shared key>

You will be prompted for username and password as soon as you try to connect to your VPN endpoint.

5.3 Configuring Your Windows PC

On Windows, I use the Shrew Soft VPN client. The current version is 2.2.2. The configuration options I use are as follows:

Tab Section/Tab Setting Value
General Remote Host Host Name or IP Address <IP/hostname of your VPN endpoint>
Port 500
Auto Configuration ike config pull
Local Host Adapter Mode Use a virtual adapter and assigned address
Obtain automatically Checked
MTU 1380
Client Firewall Options NAT Traversal enable
NAT Traversal Port 4500
Keep-alive packet rate 15
IKE Fragmentation enable
Maximum packet size 540
Other Options Enable Dead Peer Detection Checked
Enable ISAKMP Failure Notifications Checked
Enable Client Login Banner Checked
Name Resolution DNS Enable DNS Checked
Obtain Automatically Checked
Obtain Automatically (DNS Suffix) Checked
WINS Enable WINS Unchecked
Authentication Authentication Method Mutual PSK + XAuth
Authentication Local Identity Identification Type User Fully Qualified Domain Name
UFQDN String <identifier>
Remote Identity Identification Type IP Address
Address String (empty)
Use a discovered remote host address Checked
Credentials Server Certificate Authority File (empty)
Client Certificate File (empty)
Client Private Key File (empty)
Pre Shared Key <pre-shared key>
Phase 1 Proposal Parameters Exchange Type aggressive
DH exchange group 2
Cipher Algorithm auto
Cipher Key Length (empty)
Hash Algorithm auto
Key Life Time limit 86400 seconds
Key Life Data limit 0 Kbytes
Phase 1 Enable Check Point Compatible Vendor ID Unchecked
Phase 2 Proposal Parameters Transform Algorithm auto
Transform Key Length (empty)
HMAC algorithm auto
PFS Exchange disabled
Compress Algorithm disabled
Key Life Time limit 3600 seconds
Key Life Data limit 0 Kbytes
Policy IPSEC Policy Configuration Policy Generation Level auto
Maintain Persistent Security Associations Unchecked
Obtain Topology Automatically or Tunnel All Checked
Remote Network Resource (empty)

5.4 Configuring Your Linux PC

I use vpnc as a VPN client on Linux. Mine is a Linux Mint box, but vpnc should also be available on Ubuntu and Debian systems. It is command-line based and works pretty well. Install it using the command

sudo apt-get install vpnc

After that, navigate to /etc/vpnc/ and create a copy of the default.conf configuration file, for example:

cp default.conf my-vpn.conf

Edit the newly created file and fill in the parameters like this:

IPSec gateway <IP/hostname of your VPN endpoint>
IPSec ID <identifier>
IPSec secret <pre-shared secret>
IKE Authmode psk
Xauth username <username>
Xauth password <password>

<identifier> and <pre-shared secret> are the values chosen earlier during the pfSense configuration; <username> and <password> are the values entered for the user in the pfSense user manager. To connect using vpnc, just enter the following command:

sudo vpnc /etc/vpnc/my-vpn.conf

If you would like to disconnect later, just enter the following command to restore the previous routing configuration:

sudo vpnc-disconnect

6. Final Thoughts

As always, I cannot claim that this tutorial is perfect. Therefore I am more than happy to hear from you if there is something wrong with it. Contact information is provided on the web site.

How to Enable Sonic Ether’s Unbelievable Shaders for Minecraft 1.7.10

Introduction

Sonic Ether’s Unbelievable Shaders is an awesome shader pack which adds dynamics and realism to your Minecraft world, such as animated leaves on trees, ultra-realistic water textures and animations, environmental effects and dynamic lighting. It is truly amazing what this pack offers. This short tutorial shall help you get Sonic Ether’s shaders to work with your Minecraft client on a Windoze box. Please note that this tutorial modifies your local client setup of Minecraft. Therefore, it is advised to create a backup copy beforehand. The .minecraft folder in C:\Users\<username>\AppData\Roaming is probably the most important folder. Throughout this document, I will just refer to it as the .minecraft folder.

In case you do not really know what Sonic Ether’s Unbelievable Shaders are, have a look at this video:

But for now, let’s get started by installing all the components required in order to get this thing up and running.

Installing the MinecraftForge API

As a first step, navigate to http://files.minecraftforge.net and download the latest version in the “1.7.10 Downloads” section. Please click on the asterisk link, not the “Installer-Win” link. If you click the latter nevertheless, you will get some annoying adf.ly advertisements which truly suck. So, just click on the asterisk to the right of the Installer-Win link and you’re fine.

Now, double-click the binary and choose the “install client” option in the installer dialog. Then click OK.

After the installer has run through, MinecraftForge is installed. Please note the new Minecraft launcher profile that gets automatically created for you, called “Forge”.

Use this whenever playing Minecraft to make sure MinecraftForge is activated.
For now, run your Minecraft client at least once using the “Forge” profile. You need to log in once again, since it is a new profile. After this, close Minecraft and navigate to your .minecraft folder. You will notice a new folder called “mods”. Now we are ready to install our first mod, called “OptiFine”, which is recommended when using Sonic Ether’s shader pack.

Installing the OptiFine Mod

Navigate to https://optifine.net/downloads and click on the download link for “OptiFine 1.7.10 HD U A4”.

The file behind it is actually a JAR archive. Just copy this file to your .minecraft\mods folder. Then start your Minecraft client again using the “Forge” profile, as mentioned before. In the main screen, click on the new “Mods” menu and verify that the OptiFine mod is contained in the list of installed mods. Good, for now quit Minecraft again. Now we continue to install the GLSL Shaders mod.

Installing the GLSL Shaders Mod

Go to the Minecraft Forums on http://www.minecraftforum.net and enter the search term “shaders mod karyonix”. In the search results, choose the appropriate post (see image below).

Now click on download link1 in the “Minecraft 1.7.10” section to download the JAR archive. After clicking through some ads, you can download the JAR archive called “ShadersModCore-v.2.3.18-mc1.7.10-f1179.jar” (or newer). Copy this file to your .minecraft\mods folder. Restart your Minecraft client (using the “Forge” profile). If you click on “Options…” in the main screen you should see a new button called “Shaders…”.

Also, there was a folder created automatically called “shaderpacks” in your .minecraft folder. If all this applies, you’re fine, and we can proceed to the next step.

Install Sonic Ether’s Unbelievable Shaders

As a next step, we need to install the shader pack itself. This is pretty straightforward. Just go to http://www.minecraftforum.net and search for “sonic ether shaders”. Click on the appropriate post, written by “sonicether”. Scroll down to the “Latest” section and choose one of the SEUS versions to download. I chose the Ultra version. The download is in fact a ZIP file, which contains a “shaders” sub-directory. Create a new folder called “SEUS v10.1 Ultra” in your .minecraft\shaderpacks folder. Copy the whole “shaders” directory from the ZIP file into that new folder, so the final directory structure looks like .minecraft\shaderpacks\SEUS v10.1 Ultra\shaders\…. If you don’t do it this way, the shaders won’t work. If you’re done, proceed to the last step.

Activate the Shaders

Fire up Minecraft and go to “Options…”, then “Shaders…”. On the left side, click on “SEUS v10.1 Ultra”, then make sure “CloudShadow” is set to “Off”, “tweakBlockDamage” is set to “On” and “OldLighting” is set to “Off”.

Save everything and you’re set! Now you can enjoy Minecraft in a totally different way! Have fun.

Book Review “Crypto”

Introduction
Crypto is a book aimed at readers who want to get an overview of the origins of public key cryptography. Back in the seventies, researchers such as Whitfield Diffie and Martin Hellman solved some fundamental problems of cryptography, such as key exchange. Later on, public key cryptography was invented.

Contents
The book begins at the time when Whitfield Diffie made a breakthrough in cryptography by inventing a new method of sharing symmetric keys between two parties – the Diffie-Hellman key exchange protocol. The book also describes the problems that existed with the government. The NSA’s aim was to keep all of the crypto knowledge private. They didn’t want others to work on such things, let alone make crypto research public. Therefore they established strict export regulations for cryptography; only low-strength ciphers were allowed to be exported, and breaking these rules was sanctioned with prison. But the NSA also had a vast interest in surveilling their own citizens. Proof of this was their introduction of the Clipper chip, a chip built into phones and loaded with a cipher created by the NSA – in fact a government back door. It was a tough time for scientists, but thanks to people like Phil Zimmermann with his software PGP, and others, cryptography, and especially public key cryptography, was made available to the masses. Therefore, the government was later on forced to revoke their export regulations.

Recommendation
To sum up, the book is definitely a good read if you are interested in the beginnings of public key cryptography. Read it if you want to understand the problems scientists and researchers had to cope with during that time. It also gives you a pretty good insight into how the government handles certain things. The book recounts dialogues between the people involved and describes events in great detail.

How to Develop TI Voyage 200 Programs on Windows 7 64-Bit Using TIGCC

After having spent endless hours trying to get TIGCC to work on Linux, I finally gave up and tried, as a last resort, to set up a working development environment on Windows 7. In this rather short tutorial I am going to explain how you can set up a working development environment on your machine, compile your first program and finally execute it on your calculator. As mentioned above, I had some trouble getting a stable environment up and running on Linux. Therefore, if you are aware of a working configuration I would be very glad to hear from you. Thank you very much. But for now, let’s get started.

Required Hardware

Of course you need a Texas Instruments TI Voyage 200 calculator [4]. For what we’re going to do here, the OS version doesn’t really matter, but just to let you know: I am using version 3.10. This is also the latest ever published version by Texas Instruments.

You also need a communications cable to connect your calculator to your PC. I use the Silver USB Cable for Windows/Mac [5]. This cable comes along with a CD and some drivers.

The software they ship is called “TI Connect”. The version I am using is Version 4.0.0.218. In case you have an older version, download the latest edition from their website. My computer is a Windows 7 64-Bit Professional Edition machine. That’s all you need in terms of hardware.

Setting up TI Connect

I think a good step to start with is to get the connection between your calculator and your PC up and running. Therefore, we begin by installing the TI Connect software. This should be a straight-forward process. Just run the provided installer. You’ll notice that some drivers will be installed. Connect your calculator to your computer using the cable. Fire up your TI Connect software (the menu entry should be called “TI Connect” as well).

Try to make a first screenshot of your calculator by clicking on the “TI ScreenCapture” button. If this works, great. The basic connection seems to be working. That’s all we need to know by now. We come back to this software later. You can close it for now.

Setting up TIGCC

Next, we setup our development environment. The IDE of choice is TIGCC Version 0.95 which you can download from [1]. The installation on Windows is a simple process too and I had no issues getting this to work. Just download the zip file and run the installer.

Setting up the Emulator

In order to save battery life on your calculator, it makes much sense to install a software emulator on your computer. Here, I am using TiEmu 3.03 which is available from [2]. TiEmu requires the installation of the GTK runtime. I had some weird problems getting TiEmu to work using the latest GTK runtime (I think it was version 2.24). Use version 2.16.6-2010-05-12 instead (grab it from [3]). TiEmu comes along with some PedROM images which you can use.

Writing a “Hello World” Program for your TI

Fire up TIGCC and start a new project by creating a new source file. First make sure the IDE is configured to generate a binary for your calculator: click on “Project” -> “Options” -> Tab “Compilation” -> Click “Program Options…”, Tab “Calculator”. Make sure at least “V200” is selected. Now copy and paste the code below and click “Project” -> “Build” to compile the program.

#include <tigcclib.h>
 
void _main(void) {
  // clear the screen
  ClrScr();
 
  // print the string at the top left corner of the display
  DrawStr(0, 0, "Hello, World!", A_NORMAL);
 
  // wait for a key press before the program exits
  ngetchx();
}

If the build was successful, you should now find a .v2z file in the same folder as the source file is located. This is the compiled version of your program.

Transfer Your Program to the Calculator and Run It

We now want to transfer this file to our calculator. Again, start up “TI Connect” and click on the “Send to TI Device” button. This allows you to transfer files to your calculator. I think this is pretty self-explanatory, so just go on and transfer the file. You can check whether your file has been transferred successfully by looking for it in the “VAR-LINK” menu on your calculator (just press “2ND” -> “-“). It should be listed as an ASM program. Remember the name (limited to 8 chars). Go back to the home screen and call the function as “hellowor()”. A blank screen displaying “Hello, World!” should appear, such as shown in this picture:

What’s Next?

I have shown you the basic steps of developing simple C programs for your calculator. I hope this tutorial shed some light into the jungle of TI Voyage 200 development. There are several good resources on the internet which can help you gain knowledge in programming on the TI Voyage 200. Please check my links at the end of this article. For example, you could visit the ticalc.org FAQ page [6] or have a look at the official TIGCC documentation to see which commands are supported on the platform. If you have criticism or suggestions regarding this tutorial, please let me know by writing a comment to this post. Thank you.

Links

[1] http://tigcc.ticalc.org/
[2] http://sourceforge.net/projects/gtktiemu/
[3] http://gtk-win.sourceforge.net/home/index.php/Main/Downloads
[4] http://education.ti.com/en/us/products/calculators/graphing-calculators/voyage-200/features/features-summary
[5] http://education.ti.com/en/us/products/computer_software/connectivity-software/silver-usb-cable-for-windows-mac/features/features-summary
[6] http://tigcc.ticalc.org/doc/faq.html