Maven repository on Bitbucket

There are a number of cases where a standalone repository management tool like Nexus is overkill and all you need is a simple service for serving a few Maven libraries. I have been using Bitbucket private Git repositories for such cases for several years. The setup is pretty easy; all you have to do is modify ~/.m2/settings.xml as shown below:

<profiles>
    <profile>
        <id>some-project</id>
        <activation>
            <activeByDefault>true</activeByDefault>
        </activation>
        <repositories>
            <repository>
                <id>some-project-maven-repo</id>
                <url>https://api.bitbucket.org/1.0/repositories/username/git-repo/raw/releases</url>
            </repository>
        </repositories>
    </profile>
</profiles>

<servers>
    <server>
        <id>some-project-maven-repo</id>
        <username>user@exitcode.net</username>
        <password>normal_or_app_password</password>
    </server>
</servers>

This configuration accesses the repository through the Bitbucket REST API v1.0, where:

  • username is your Bitbucket username or team name
  • git-repo is the name of the Git repository created for hosting the Maven libraries
  • releases is the name of the Git branch the libraries are stored in
  • user@exitcode.net is the email of a user that has access to the Git repository (if you use an app password to protect your main Bitbucket password, you must use the username instead of the email)

This works pretty well because Bitbucket Cloud API 1.0 challenges the client with an HTTP 401 Unauthorized response when secured content, such as a private repository, is accessed without credentials. Once challenged, the Maven Wagon HTTP provider sends the credentials stored in settings.xml to the server in the form of an Authorization header.

API 2.0

However, Bitbucket recently announced that Cloud API 1.0 is deprecated as of June 30 and, what's worse, all version 1.0 APIs will be removed permanently on 31 December 2018. So all users are forced to switch to API 2.0. To access the same Git repository with Maven artifacts you have to use the following configuration:

<profiles>
    <profile>
        <id>some-project</id>
        <activation>
            <activeByDefault>true</activeByDefault>
        </activation>
        <repositories>
            <repository>
                <id>some-project-maven-repo</id>
                <url>https://api.bitbucket.org/2.0/repositories/username/git-repo/src/releases</url>
            </repository>
        </repositories>
    </profile>
</profiles>

<servers>
    <server>
        <id>some-project-maven-repo</id>
        <configuration>
            <httpHeaders>
                <property>
                    <name>Authorization</name>
                    <value>Basic dXNlckBleGl0Y29kZS5uZXQ6cGFzc3dvcmQ=</value>
                </property>
            </httpHeaders>
        </configuration>
    </server>
</servers>

The tricky part is the authentication. You have probably noticed that the Authorization header with basic authentication credentials (the value user:password encoded in Base64) is passed to the server directly. This is because API 2.0 no longer challenges for authentication and returns HTTP 403 Forbidden straight away if it does not find credentials in the request. With this configuration the Maven Wagon HTTP provider (actually the underlying Apache HttpClient) doesn't wait for an authentication challenge that never comes and sends the credentials immediately.
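
If you need to produce that header value yourself, here is a minimal sketch using the standard java.util.Base64 API (available since Java 8); the credentials are placeholders, substitute your own username and (app) password:

import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class BasicAuthHeaderValue {

    public static void main(String[] args) {
        // placeholder credentials - replace with your Bitbucket user and (app) password
        String credentials = "user@exitcode.net:normal_or_app_password";
        // Base64-encode "user:password" and prefix it with the "Basic " scheme
        String headerValue = "Basic " + Base64.getEncoder()
                .encodeToString(credentials.getBytes(StandardCharsets.UTF_8));
        System.out.println(headerValue); // paste the output into the <value> element in settings.xml
    }
}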

Maven HTTP communication logs

While I was debugging what's going on under the hood, I discovered that the Maven Wagon HTTP provider bundles the entire Apache HttpClient, along with other Apache libraries like Commons, into one fat JAR. This is crucial information if you want to enable trace logs of the HTTP client, because the bundled client lives in a different Java package than the original! So, if you want to enable trace logs of the HTTP communication that Maven makes, add the following line to ${MAVEN_HOME}/conf/logging/simplelogger.properties (create the file if it does not exist):

org.slf4j.simpleLogger.log.org.apache.maven.wagon=TRACE

At first, I started with the information that Maven since v3.0.4 uses Apache HttpClient 4, so I was trying to enable HTTP trace logs on the org.apache.http logger as described in the official documentation. That did not work, so I decompiled wagon-http-2.9-shaded.jar from the Maven 3.5.4 installation, where I found the bundled client, and after going through some of the sources I eventually enabled tracing on the right logger.

OutOfMemoryError email alert

When you are profiling an application or looking for a memory leak, taking a heap dump is always essential. There are multiple ways to do it manually, for instance directly with the jmap command or with a monitoring tool like VisualVM, JConsole, JMC etc. However, when bad things happen in production it's useful to be alerted and have a heap dump stored. For such a scenario the JDK offers several useful command line options:

  • -XX:+HeapDumpOnOutOfMemoryError
  • -XX:HeapDumpPath
  • -XX:OnOutOfMemoryError

which can be used in the following manner:

-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/heapdumps -XX:OnOutOfMemoryError=/opt/app/send_alert_email.sh

I think the above configuration is self-explanatory. When an OutOfMemoryError occurs, the JVM automatically saves a heap dump of the application into the /tmp/heapdumps folder. Note that if the heapdumps directory doesn't exist, it won't be created and the JVM stores the dump in a new file called heapdumps instead. Because that file gets overwritten every time the application throws java.lang.OutOfMemoryError, you should create the target directory manually, otherwise the dump from the previous failure would be lost forever. Moreover, the shell script send_alert_email.sh is called, which actually sends the email notification.

Below is a notification script I normally use for this purpose. It sends an empty email with the subject 'Java App - OutOfMemory alert!' to recipient@domain.com using a pre-configured SMTP server with authentication.

#!/bin/sh
mailx -v -s 'Java App - OutOfMemory alert!' -S smtp=10.0.0.1:25 -S from=app@domain.com -S smtp-auth=login -S smtp-auth-user="smtp-user-name" -S smtp-auth-password="smtp-user-password" recipient@domain.com < /dev/null
echo "Alert email sent"

The script must be executable by the user the Java application runs under. The above configuration is easy to test using a memory-leaking application. You can use this simple one:

public class OutOfMemoryTest {
    public static void main(String[] args) {
        StringBuilder sb = new StringBuilder();
        while (true) {
            sb.append(new String("test"));
        }
    }
}

Save the class into an OutOfMemoryTest.java file, create the /tmp/heapdumps folder, compile the class and run the test application:

$ javac OutOfMemoryTest.java
$ java -Xmx5m -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/heapdumps -XX:OnOutOfMemoryError=./send_alert_email.sh -cp . OutOfMemoryTest
java.lang.OutOfMemoryError: Java heap space
Dumping heap to /tmp/heapdumps/java_pid25781.hprof ...
Heap dump file created [3101545 bytes in 0.023 secs]
#
# java.lang.OutOfMemoryError: Java heap space
# -XX:OnOutOfMemoryError="./send_alert_email.sh"
#   Executing /bin/sh -c "./send_alert_email.sh"...
Alert email sent
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
	at java.util.Arrays.copyOf(Arrays.java:2367)
	at java.lang.AbstractStringBuilder.expandCapacity(AbstractStringBuilder.java:130)
	at java.lang.AbstractStringBuilder.ensureCapacityInternal(AbstractStringBuilder.java:114)
	at java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:415)
	at java.lang.StringBuilder.append(StringBuilder.java:132)
	at OutOfMemoryTest.main(OutOfMemoryTest.java:6)
$ ls /tmp/heapdumps
java_pid25781.hprof

You should now have an email about the OutOfMemoryError in your mailbox.

Mapping namespace-less query parameters using Spring MVC in Liferay

When you work with portlet applications, all request parameters must be namespaced, otherwise you won't have access to their values through the Portlet API in Liferay. However, in some cases your portlet has to process a URL that was not constructed using the Portlet API. In this post I will show you how to map un-namespaced query parameters declaratively using the Spring Portlet MVC framework.

Imagine you have a single portlet deployed on your portal page and that portlet displays the content of an article whose ID is passed as a query parameter, for instance:

https://news.portal.my/?id=123

That's a pretty standard requirement. A URL like that is clear and easily bookmarkable. However, since the parameter id is not namespaced, your portlet won't have access to it through the Portlet API. So the following code prints null into the log:

@Controller
public class NewsController {

    private static final Logger LOG = Logger.getLogger(NewsController.class);

    @RenderMapping
    public void render(@RequestParam(value = "id", required = false) Integer id) {
        LOG.debug(id);
    }
}

In a portal, the namespace is unique for each portlet and is used as a prefix for all of the portlet's parameters. This mechanism helps avoid naming collisions when several portlets are deployed on the same page. For that reason you have to put the portlet's namespace into the URL to make the prior code work:

https://news.portal.my/?p_p_id=news_WAR_portal&p_p_lifecycle=0&p_p_state=normal&p_p_mode=view&p_p_col_id=column-2&p_p_col_count=1&_news_WAR_portal_id=123

Now the URL is no longer short and clear. You could use Liferay's friendly URL feature to simplify it, although that would require additional XML configuration. However, you also have a second option: you can access the id parameter using the Liferay API.

@Controller
public class NewsController {

    private static final Logger LOG = Logger.getLogger(NewsController.class);

    @RenderMapping
    public void view(RenderRequest req) {
        HttpServletRequest liferayServletReq = PortalUtil.getHttpServletRequest(req);
        String paramId = PortalUtil.getOriginalServletRequest(liferayServletReq).getParameter("id");

        LOG.debug(paramId);
    }
}

That worked fine; the log revealed the correct article number, 123. But we can do better. We can wrap the above code in a custom WebArgumentResolver implementation and create an annotation similar to @RequestParam that maps un-namespaced query parameters to the controller's method arguments. Let's call it @QueryParam. Using this approach we can take advantage of Spring's great annotation-based programming model.

@Target(ElementType.PARAMETER)
@Retention(RetentionPolicy.RUNTIME)
@Documented
public @interface QueryParam {

    String value() default "";

    boolean required() default true;

    String defaultValue() default ValueConstants.DEFAULT_NONE;
}

Notice that, except for the annotation name, the code is exactly the same as for @RequestParam and it works almost identically. Now the WebArgumentResolver implementation:

public class QueryParamResolver implements WebArgumentResolver {

    @Override
    public Object resolveArgument(MethodParameter param, NativeWebRequest request) throws Exception {
        Assert.isInstanceOf(PortletRequest.class, request.getNativeRequest(),
                "You can use @QueryParam only in application running within a portlet container!");
        if (!param.hasParameterAnnotation(QueryParam.class)) {
            return UNRESOLVED;
        }

        return mapQueryParamToObject(param, (PortletRequest) request.getNativeRequest());
    }

    private Object mapQueryParamToObject(MethodParameter param, PortletRequest portletReq) {
        QueryParam queryAnnot = param.getParameterAnnotation(QueryParam.class);
        String queryParamName = queryAnnot.value();
        String queryParamValue = getServletRequest(portletReq).getParameter(queryParamName);
        if (queryParamValue == null) {
            Class<?> paramType = param.getParameterType();
            if (queryAnnot.required()) {
                throw new IllegalStateException("Missing parameter '" + queryParamName + "' of type ["
                        + paramType.getName() + "]");
            }
            if (boolean.class.equals(paramType)) {
                return Boolean.FALSE;
            }
            if (paramType.isPrimitive()) {
                throw new IllegalStateException(
                        "Optional "
                                + paramType
                                + " parameter '"
                                + queryParamName
                                + "' is not present but cannot be translated into a null value due to being declared as a "
                                + "primitive type. Consider declaring it as object wrapper for the corresponding primitive type.");
            }
        }

        WebDataBinder binder = new WebRequestDataBinder(null, queryParamName);
        return binder.convertIfNecessary(queryParamValue, param.getParameterType(), param);
    }

    private HttpServletRequest getServletRequest(PortletRequest request) {
        return PortalUtil.getOriginalServletRequest(PortalUtil.getHttpServletRequest(request));
    }
}

Only one step is left. In order to map an un-namespaced query parameter to a render method parameter marked with the @QueryParam annotation, we need to register the resolver in our Spring context:

    <bean id="annotationMethodHandlerAdapter" class="org.springframework.web.portlet.mvc.annotation.AnnotationMethodHandlerAdapter">
        <property name="customArgumentResolver">
            <bean class="org.exitcode.liferay.QueryParamResolver" />
        </property>
    </bean>

Finally, we can place our new annotation into controller:

@Controller
public class NewsController {

    private static final Logger LOG = Logger.getLogger(NewsController.class);

    @RenderMapping
    public void render(@QueryParam(value = "id", required = false) Integer id) {
        LOG.debug(id);
    }
}

Using the declarative approach, our code is now much more readable thanks to the clear separation between the part that describes the mapping (@QueryParam) and the part that actually performs it (QueryParamResolver).

Why Eclipse shows a warning about synthetic accessor methods

Recently one of my colleagues came to me with an interesting question about the builder pattern he saw in Effective Java, which I had lent him. He implemented the pattern in the same way as Joshua Bloch does in his book, but no matter what, Eclipse displayed a weird warning message about a synthetic accessor. I explained the theory behind synthetic constructs to him, but I was not quite sure why Eclipse allows raising a compiler warning in such cases. In this post I would like to touch on that topic and find out why Eclipse does that.

The source code of the builder in question looks like the following:

public class NutritionFacts {

    private final int servingSize;
    private final int servings;
    private final int calories;
    private final int fat;
    private final int sodium;
    private final int carbohydrate;

    public static class Builder {

        private int servingSize;
        private int servings;
        private int calories;
        private int fat;
        private int sodium;
        private int carbohydrate;

        public Builder servingSize(int servingSize) {
            this.servingSize = servingSize;
            return this;
        }

        public Builder servings(int servings) {
            this.servings = servings;
            return this;
        }

        public Builder calories(int calories) {
            this.calories = calories;
            return this;
        }

        public Builder fat(int fat) {
            this.fat = fat;
            return this;
        }

        public Builder sodium(int sodium) {
            this.sodium = sodium;
            return this;
        }
        
        public Builder carbohydrate(int carbohydrate) {
            this.carbohydrate = carbohydrate;
            return this;
        }

        public NutritionFacts build() {
            return new NutritionFacts(this);
        }
    }
    
    private NutritionFacts(Builder builder) {
        this.servingSize = builder.servingSize;
        this.servings = builder.servings;
        this.calories = builder.calories;
        this.fat = builder.fat;
        this.sodium = builder.sodium;
        this.carbohydrate = builder.carbohydrate;
    }
}

This is the most typical builder implementation. The immutable NutritionFacts object is created by invoking its private constructor from the builder's build() method, passing the builder instance as a parameter. So it is a kind of copy constructor that copies the data from the builder to the created instance, which cannot be modified afterwards.
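
For reference, this is roughly how such a builder is used (the nutrition values are arbitrary):

NutritionFacts cola = new NutritionFacts.Builder()
        .servingSize(240)
        .servings(8)
        .calories(100)
        .sodium(35)
        .carbohydrate(27)
        .build();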

Problem

If you change the compiler severity level from Ignore (the default) to Warning for:

Access to a non-accessible member of an enclosing type

in the Eclipse IDE (Preferences->Java->Compiler->Errors/Warnings->Code Style), you will see the following warnings for the builder above:

Access to enclosing constructor is emulated by a synthetic accessor method (line 26)
Read access to enclosing field is emulated by a synthetic accessor method (lines 31 – 36)

Synthetic constructs

Let's recall why the Java compiler must generate synthetic accessor methods. As you certainly know, private fields and methods are not visible outside a class. The same principle applies to nested classes as well, because at the JVM level there are no nested classes. The concept of nested classes is known only to the Java language at the source code level; the bytecode has no support for them. After compilation, they end up as standalone classes in their own files. Therefore, on the JVM, private fields and methods of the enclosing class are not accessible to nested classes, and the enclosing class can't access the private members of its nested classes (no matter whether they are static or inner). However, according to the JLS, Java does allow access to the private fields and methods of a nested class. So there are two opposite sides: the Java language, whose syntax allows it, and the JVM, which refuses to execute it. For that reason the Java compiler must generate package-private synthetic methods through which the private members of the enclosing class can be accessed from nested classes and vice versa, without any need to change the source code. Let's look at what the generated synthetic constructor looks like:

/*synthetic*/ NutritionFacts(NutritionFacts.Builder x0, NutritionFacts$1 x1) {
    this(x0);
}

So the generated synthetic package-private constructor calls the original private constructor. But how does the new synthetic constructor get called? That's again the work of the Java compiler, which modifies the content of the build() method inside the builder to the following form:

public NutritionFacts build() {
    return new NutritionFacts(this, null);
}

As the second constructor parameter is useless, the compiler can safely put a null value in it. Similar generation is also done for the private builder fields that are read within the private NutritionFacts constructor. The synthetic accessor methods look like this:

    /*synthetic*/ static int access$600(NutritionFacts$Builder x0) {
        return x0.carbohydrate;
    }

    /*synthetic*/ static int access$500(NutritionFacts$Builder x0) {
        return x0.sodium;
    }

    /*synthetic*/ static int access$400(NutritionFacts$Builder x0) {
        return x0.fat;
    }

    /*synthetic*/ static int access$300(NutritionFacts$Builder x0) {
        return x0.calories;
    }

    /*synthetic*/ static int access$200(NutritionFacts$Builder x0) {
        return x0.servings;
    }

    /*synthetic*/ static int access$100(NutritionFacts$Builder x0) {
        return x0.servingSize;
    }

Of course, the Java compiler also needs to modify the private constructor, which must use the generated synthetic methods to gain access to the private fields of the builder:

    private NutritionFacts(NutritionFacts$Builder builder) {
        super();
        this.servingSize = NutritionFacts.Builder.access$100(builder);
        this.servings = NutritionFacts.Builder.access$200(builder);
        this.calories = NutritionFacts.Builder.access$300(builder);
        this.fat = NutritionFacts.Builder.access$400(builder);
        this.sodium = NutritionFacts.Builder.access$500(builder);
        this.carbohydrate = NutritionFacts.Builder.access$600(builder);
    }

If you are interested in how I obtained the generated code above, please look at Sundararajan's post that explains some hidden options of the Java compiler. You simply need to invoke the compiler with the -XD-printflat option:

javac -XD-printflat NutritionFacts.java -d generated-src

With this command, the Java compiler writes source code that has already passed through some internal transformations (synthetic methods, assertions etc.) into the generated-src directory. Note that you must create this directory before you run the compiler.

Possible warning reasons

Now that you have mastered the basics of synthetic constructs, let's get back to the Eclipse warnings. What are the possible unwanted implications of using synthetic constructs, and why does Eclipse probably offer such warnings?

  • Performance penalty – almost negligible on a modern JVM if you look at the synthetic code the compiler generates
  • Security breach – the generated package-private synthetic constructor can be accessed via reflection and thus used to create an instance of a class that was not intended to be instantiated from the outside at the source code level. You have to take this into account when you model your class and rely on synthetic accessors (see the sketch below)
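
To illustrate the security point, here is a minimal sketch (the demo class name is mine), assuming NutritionFacts was compiled with a pre-Java 11 javac; newer compilers use nest-based access and no longer need to emit the synthetic constructor. It looks up the synthetic constructor via reflection and invokes it from outside the class:

import java.lang.reflect.Constructor;

public class SyntheticConstructorAccess {

    public static void main(String[] args) throws Exception {
        NutritionFacts.Builder builder = new NutritionFacts.Builder().calories(100);
        for (Constructor<?> ctor : NutritionFacts.class.getDeclaredConstructors()) {
            if (ctor.isSynthetic()) {
                // the package-private accessor constructor (Builder, NutritionFacts$1)
                ctor.setAccessible(true);
                Object instance = ctor.newInstance(builder, null);
                System.out.println("Instance created via synthetic constructor: " + instance);
            }
        }
    }
}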

Conclusion

Using the builder example I tried to demonstrate how the Java compiler generates synthetic constructs. Fortunately, Eclipse doesn't enforce such warnings and ignores them by default, until you state otherwise. Personally, I would continue to take advantage of synthetic methods for nested classes, as they reduce the amount of code that would otherwise have to be written. The second reason is that you keep private members private and inaccessible without resorting to reflection.

Interface vs annotation driven events

A few days ago I was at a crossroads over how to design and implement a simple notification system for one project. I had decided to go with the standard interface-driven setup, the common approach in Java since the beginning. However, after some googling I came across another very interesting solution: EventBus. It's a simple, annotation-driven event utility that is part of the Google Guava library.

EventBus allows publish-subscribe-style communication between components without requiring the components to explicitly register with one another (and thus be aware of each other). It is designed exclusively to replace traditional Java in-process event distribution using explicit registration. It is not a general-purpose publish-subscribe system, nor is it intended for interprocess communication.

Interface-driven solution
Before diving into EventBus, let me show you what a typical interface-based notification system could look like. First we need to define a simple immutable object that will be transferred via an event from the producer to the consumers (it is reused in the EventBus example as well):

public class Message {

    private final String sender;

    private final String subject;

    private final String text;

    public Message(String sender, String subject, String text) {
        this.sender = sender;
        this.subject = subject;
        this.text = text;
    }

    public String getSender() {
        return sender;
    }

    public String getSubject() {
        return subject;
    }

    public String getText() {
        return text;
    }
}

Then we need to create a listener and an event that carries an object of the above Message type. A separate listener must exist for each event (I used a very effective technique proposed by Laurent Simon, in which the listener and the event are bound to each other within the event class):

public interface MessageRecievedEventListener {
    public void messageRecieved(Message msg);
}
public interface SystemEvent<L> {
    public void notify(L listener);
}
public class MessageRecievedEvent implements SystemEvent<MessageRecievedEventListener> {

    private final Message msg;

    public MessageRecievedEvent(Message msg) {
        this.msg = msg;
    }

    @Override
    public void notify(MessageRecievedEventListener listener) {
        listener.messageRecieved(msg);
    }
}

Now that we have an event and a corresponding listener, we can create a consumer of the above event:

public class MessageReceivedEventConsumer implements MessageRecievedEventListener {

    private static final Logger LOG = LoggerFactory.getLogger(MessageReceivedEventConsumer.class);

    @Override
    public void messageRecieved(Message msg) {
        LOG.info("messageRecieved(), msg: {}", msg);
    }
}

The only thing missing is the dispatcher. The dispatcher is a component responsible for registering consumers for particular events and for firing events. When a specific event is fired, all registered consumers receive the exact event that was fired. In our implementation the dispatcher is a black box, since it knows nothing about any specific event or listener; it works only with the SystemEvent interface.

public class SystemEventBus {

    // ReentrantReadWriteLock could be used if synchronization has proven to be a bottleneck
    @SuppressWarnings("rawtypes")
    private final Multimap<Class, Object> eventBusListeners = Multimaps.synchronizedMultimap(HashMultimap.<Class, Object> create());

    public <L> void registerListener(Class<? extends SystemEvent<L>> eventClass, L listener) {
        eventBusListeners.put(eventClass, listener);
    }

    @SuppressWarnings("unchecked")
    public <L> void fireEvent(SystemEvent<L> event) {
        Collection<L> eventListeners = (Collection<L>) eventBusListeners.get(event.getClass());
        for (L listener : eventListeners) {
            event.notify(listener);
        }
    }
}
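
For completeness, a short usage example that ties the dispatcher, event and consumer together (the message values are arbitrary):

SystemEventBus eventBus = new SystemEventBus();
eventBus.registerListener(MessageRecievedEvent.class, new MessageReceivedEventConsumer());

Message msg = new Message("marlly", "Hello", "Interface-driven event example");
eventBus.fireEvent(new MessageRecievedEvent(msg));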

EDIT 08/2015: Sources including unit test are now available on GitHub.

Advantages:

  • static typing
  • the dispatcher is a black box; it knows nothing about any particular event or listener interface

Disadvantages:

  • necessary to create a listener interface for each event
  • potential collision of method names in listener interfaces (when a subscriber implements multiple listener interfaces)

Annotation-driven solution
On the other side, implementing the preceding example using EventBus is much easier. No specific interfaces are required. A listener only has to define a public method, marked with the @Subscribe annotation, with one parameter: the event it wants to capture.

public class MessageRecievedEvent {

    private final Message msg;

    public MessageRecievedEvent(Message msg) {
        this.msg = msg;
    }

    public Message getMsg() {
        return msg;
    }
}
public class MessageRecievedEventConsumer {

    private static final Logger LOG = LoggerFactory.getLogger(MessageRecievedEventConsumer.class);

    @Subscribe
    public void messageRecieved(MessageRecievedEvent e) {
        LOG.info("messageRecieved(), msg: {}", e.getMsg());
    }
}

The listener must be registered with an EventBus instance in order to be notified when an event is fired:

EventBus eventBus = new EventBus();
eventBus.register(new MessageRecievedEventConsumer());

Message msg = new Message("marlly", "Interface vs annotation driven events", "Post about differences between those event architectures");
eventBus.post(new MessageRecievedEvent(msg));

EDIT 08/2015: Sources including unit test are now available on GitHub.

Advantages:

  • less code
  • no specific listener interface for each event
  • can listen to an event supertype and take advantage of inheritance
  • detects events that have no attached listeners

Disadvantages:

  • arguable lack of static typing (the register and post methods accept parameters of type Object)

Conclusion
If you are already using the Google Guava library and are looking for a simple notification system, you should definitely use EventBus. For everyone else, just add the Guava library and use it too :). It's a really simple and effective way to handle events.

Non-ASCII file names in ZIP archive

Recently one of our clients reported a bug regarding the use of Czech national characters in file names within a ZIP archive. They just didn't display correctly. After some analysis I discovered something I never believed was possible nowadays: Windows 7 has no native support for UTF-8 encoded file names in ZIP archives! Come on Microsoft, it's 2011 and support for UTF-8 file name characters has been around for at least 5 years (officially introduced in v6.3.0 of the ZIP specification).

The whole problem with file name encoding lies in the fact that the ZIP format by default uses the IBM PC character encoding set, also known as IBM Code Page 437, IBM437 or CP437. Unfortunately this code page restricts file name characters to only those within the original MS-DOS range, so it's quite limited. Therefore, if you want to use most national characters in file names within a ZIP, you basically have two options:

  • Use UTF-8 and set the language encoding flag to instruct the processing tool that the characters in the file names are encoded in UTF-8
  • Use whatever encoding is native to your specific target platform

First Option
With the first option you can achieve the best interoperability among operating systems. The downside of this approach is that Windows users have to use a third-party application to handle such ZIP archives, because the built-in compressed folder feature doesn't display UTF-8 characters correctly. All well-known ZIP processing tools I tried on Windows (WinZip, WinRAR, 7-Zip) were able to display UTF-8 encoded file names properly. 7-Zip on Unix-based systems also displayed such file names correctly. Here is a Java code snippet that creates a ZIP archive containing two empty files with Slovak national characters in each file name.

ZipArchiveOutputStream zipOut = new ZipArchiveOutputStream(new FileOutputStream("/tmp/utf8.zip"));
zipOut.setEncoding("UTF-8");
zipOut.setUseLanguageEncodingFlag(true);
zipOut.putArchiveEntry(new ZipArchiveEntry("1_ľščťžýáíé.txt"));
zipOut.closeArchiveEntry();
zipOut.putArchiveEntry(new ZipArchiveEntry("2_úäôňďúě.txt"));
zipOut.closeArchiveEntry();
zipOut.flush();
zipOut.close();

This example uses the Apache Commons Compress library, which allows you to specify the encoding and set the language flag. If you are lucky and already using Java 7, released last month, you can use the classes from the java.util.zip package, which gained new constructors for setting the encoding. In addition, these classes use UTF-8 by default and read/write the language encoding flag. On Java versions <= 1.6 just stay with the commons-compress library.
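
For illustration, here is a minimal sketch of the same archive written with the Java 7 java.util.zip classes; the ZipOutputStream constructor taking a Charset is the new one mentioned above, and the output path is just an example:

import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;

public class Utf8ZipJava7 {

    public static void main(String[] args) throws IOException {
        // UTF-8 is the default in Java 7; passing the charset explicitly just documents the intent
        try (ZipOutputStream zipOut = new ZipOutputStream(
                new FileOutputStream("/tmp/utf8-java7.zip"), StandardCharsets.UTF_8)) {
            zipOut.putNextEntry(new ZipEntry("1_ľščťžýáíé.txt"));
            zipOut.closeEntry();
            zipOut.putNextEntry(new ZipEntry("2_úäôňďúě.txt"));
            zipOut.closeEntry();
        }
    }
}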

Second Option
The second option is the way to go when you target only one operating system using a specific code page (that's our customer's case and the approach I eventually employed). Suppose all your users use Windows with code page 852 (CP852, IBM852 – the standard code page used by Central European countries). In this case you can generate the ZIP archive in almost the same way as above, but this time set the encoding to CP852 and omit the encoding flag.

ZipArchiveOutputStream zipOut = new ZipArchiveOutputStream(new FileOutputStream("/tmp/cp852.zip"));
zipOut.setEncoding("CP852");
zipOut.putArchiveEntry(new ZipArchiveEntry("1_ľščťžýáíé.txt"));
zipOut.closeArchiveEntry();
zipOut.putArchiveEntry(new ZipArchiveEntry("2_úäôňďúě.txt"));
zipOut.closeArchiveEntry();
zipOut.flush();
zipOut.close();

Every tool on a platform using the default code page 852 will display the national characters from this ZIP file correctly, including the Windows compressed folder tool. In order to find out what code page Windows currently uses, simply navigate to the following node in the registry:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Nls\CodePage

and look for the value named OEMCP.

And remember, there is no such thing as a universal, always-working approach to ZIP file name encoding.