Server Bug Fix: JBoss EAP 7, IO Subsystem workers configurations

I am asking here because I cannot find much information on this…

What is the main use of the setting for “Io threads” and “Task max threads” under the IO subsystem workers configuration?

Everywhere I read, people are saying that the “Io threads” specify the number of concurrent requests that the server can handle, and that the “Task max threads” is the maximum number of concurrent requests that the server can handle.

So does this mean that if I set the “Io threads” to 50, JBoss can handle 50 browser requests concurrently?

I have a site where the requirement is to serve 1500 concurrent users within a 15 second time frame, and each request should not take more than 3 seconds to complete. This includes downloading the HTML, JS, CSS, and all the JPG files that the browser needs.

Does this mean that I need to set the “Io threads” to a higher number, like 100, and the “Task max threads” to 150?

I have tried setting “Task max threads” to 150, and even 250, and it seems to slow down my site.

Can anyone explain how these two settings work?

You usually don’t need a separate IO thread for each user connection. You may, however, need as many task threads as there are concurrent requests at any given moment to avoid slowdowns. Try the defaults first and see how they work for your application. See these support articles:

Note that EAP 7.2 has an additional core-threads configuration for efficiency.

Update: Since you want more comprehensive instructions, it is better to check the full Red Hat JBoss Enterprise Application Platform performance tuning guide.

Update 2: Because EAP 7 uses Undertow, the Undertow docs should give some clues about what the IO and worker threads do:

Management of IO and Worker threads

The XNIO worker manages both the IO threads, and a thread pool that can be used for blocking tasks. In general non-blocking handlers will run from within an IO thread, while blocking tasks such as Servlet invocations will be dispatched to the worker thread pool.
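To illustrate the pattern the quoted docs describe, here is a minimal sketch of an Undertow handler (plain Undertow API, not specific to the EAP configuration being discussed): it starts on a non-blocking IO thread and explicitly dispatches blocking work to the worker (task) thread pool.

import io.undertow.server.HttpHandler;
import io.undertow.server.HttpServerExchange;

public class BlockingAwareHandler implements HttpHandler {
    @Override
    public void handleRequest(HttpServerExchange exchange) throws Exception {
        if (exchange.isInIoThread()) {
            // we are on a non-blocking IO thread: hand the request off to the worker (task) pool
            exchange.dispatch(this);
            return;
        }
        // from here on we are on a worker task thread, so blocking calls are allowed
        exchange.getResponseSender().send("done");
    }
}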

@IWantSimpleLife, as @akostadinov noted, the documentation describes the design and how “blocking IO” work is kept separate from handler work. To answer your question: I think I’m correct in saying that “IO threads” and “Task max threads” are not directly related, because they refer to two separate thread groups. The first is the “Worker IO threads” group, the initial set of IO-related “read” and “write” handlers that listen on NIO channels and events. Ideally there is only one thread in the IO pool, and it must never execute blocking (network or file) processing. The other group is generally called the “Worker Task Threads” group, which is dedicated to executing blocking work. One of the settings of this second group is the “Task max threads” size.
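For reference, both settings live on the worker in the IO subsystem and can be read and changed from the CLI. A minimal sketch, assuming the worker is named default (the usual name in a standalone configuration) and that the values below are only placeholders:

# inspect the current worker configuration
/subsystem=io/worker=default:read-resource(include-runtime=true)

# "Io threads": the small pool of non-blocking selector threads
/subsystem=io/worker=default:write-attribute(name=io-threads, value=4)

# "Task max threads": the upper bound of the blocking task (worker) pool
/subsystem=io/worker=default:write-attribute(name=task-max-threads, value=150)

reload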

Code Bug Fix: CDI Bean method not called when using myfaces bundled in webapp and run on Wildfly

I want to migrate my JSF Application from ManagedBean to CDI Beans.

First of all I did a simple test to see whether CDI beans are working, but they are not.
Here is my example, using WildFly 10 and MyFaces 2.2.

beans.xml in WEB-INF:

<beans xmlns="http://xmlns.jcp.org/xml/ns/javaee"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/javaee
        http://xmlns.jcp.org/xml/ns/javaee/beans_1_1.xsd"
       bean-discovery-mode="all">
</beans>

xhtml page:

<!DOCTYPE html>

<html xmlns="http://www.w3.org/1999/xhtml"
      xmlns:h="http://java.sun.com/jsf/html">

<h:head>
    <title>
        test
    </title>
</h:head>

<h:body>
    <h:outputText value="#{CDITest.hello}"/>
    <h:outputText value="test"/>
</h:body>

</html>

The backing Bean

import javax.enterprise.context.SessionScoped;
import javax.inject.Named;
import java.io.Serializable;

  @Named("CDITest")
  @SessionScoped
  public class CDITest implements Serializable{

    public String getHello(){
      return "Hello";
    }
}

The output

test

No error message (!) and no call to the CDITest.getHello() method. What am I missing?

The problem is more general.
In JSF 2.3, JSF picks up CDI via BeanManager#getELResolver. Before JSF 2.3, the container or the CDI implementation has to wire JSF and CDI together.

I think you need to declare a @FacesConfig-annotated class to activate CDI in JSF 2.3:

import javax.enterprise.context.ApplicationScoped;
import javax.faces.annotation.FacesConfig;

@FacesConfig
@ApplicationScoped
public class ConfigurationBean {

}

Upgrading to MyFaces 2.3.6 (JSF 2.3) on WildFly 19.0.0 solved the issue. Note that you need a @FacesConfig class as suggested by @Cristyan.

Note also that with Mojarra the issue didn’t happen and the bean worked as expected (for both WildFly 10 and WildFly 19).
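For reference, bundling MyFaces 2.3.x in the WAR usually comes down to a pair of Maven dependencies. A minimal sketch, assuming the standard org.apache.myfaces.core coordinates; on WildFly you will typically also need the org.jboss.jbossfaces.WAR_BUNDLES_JSF_IMPL context parameter set to true in web.xml so the server-provided JSF is not used:

<!-- sketch: bundle the MyFaces 2.3.x API and implementation in the WAR -->
<dependency>
    <groupId>org.apache.myfaces.core</groupId>
    <artifactId>myfaces-api</artifactId>
    <version>2.3.6</version>
</dependency>
<dependency>
    <groupId>org.apache.myfaces.core</groupId>
    <artifactId>myfaces-impl</artifactId>
    <version>2.3.6</version>
</dependency>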

Code Bug Fix: org.postgis.PGgeometry cannot be cast to org.postgis.PGgeometry

I am migrating the connection mode of a project from using DriverManager to DataSource. But I got this error:

org.postgis.PGgeometry cannot be cast to org.postgis.PGgeometry

I have tried adding and removing the PostGIS library from both the server and the project, but without results.

Thanks in advance for the help.

Thank you very much mikedb!
It’s true, the library is in two places:
1. packaged in the project by Maven
2. in the WildFly server
If I take it out of (1), the project doesn’t compile.
If I take it out of (2), at runtime it fails with the error:
ERROR [stderr] (default task-1) java.lang.ClassCastException: org.postgresql.util.PGobject cannot be cast to org.postgis.PGgeometry

It seems that the packaged library (1) can’t be used at a certain point and the one in the server (2) is needed.

This is almost certainly caused by having 2 jar/library files on the classpath with this class in it.

Check your classpath locations for Wildfly and make sure you only have one copy of the postgis library on the classpath.

If you don’t find the duplicate, check some more – you will find it eventually.

One way to do this is to use the following code to locate the class (it prints the JAR file the class is loaded from):

// Point this at the duplicated class to see which JAR it is loaded from
Class<?> klass = org.postgis.PGgeometry.class;
java.net.URL location = klass.getResource('/' + klass.getName().replace('.', '/') + ".class");
System.out.println(location);

Then, remove that jar from your classpath and run the same code to get the next location until you locate/remove them all.

Then, add one back.

If you are using a library that is already packaged in WildFly, set the Maven scope to provided: the project will compile against it, but it will not be packaged into your artifact. Just make sure you have the same version on both sides.

provided means your project needs it to compile/test/run, but it will be provided by the container you are running in (WildFly, in your case).

See here for details: https://maven.apache.org/guides/introduction/introduction-to-dependency-mechanism.html#Dependency_Scope
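A minimal sketch of what that could look like in the pom.xml, assuming the net.postgis:postgis-jdbc coordinates; match the version to whatever the WildFly module actually ships:

<!-- sketch: compile against the PostGIS JDBC extension, but let the server module provide it at runtime -->
<dependency>
    <groupId>net.postgis</groupId>
    <artifactId>postgis-jdbc</artifactId>
    <version>2.5.0</version>
    <scope>provided</scope>
</dependency>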

Code Bug Fix: WildFly: HTTP method POST is not supported by this URL

I’m developing a Java EE web application running on WildFly 18, with Angular on the front end. All the HTTP calls from Angular to WildFly are POSTs. The application works fine, but about once a month, when I start it, I cannot use it because WildFly rejects the requests saying that the HTTP method POST is not supported by this URL (see the error below from the browser console). Just to make sure it is not Angular, I made the POST call from a Java program and got the same error.

The solution is to close everything and restart, sometimes more than once. Why does this happen and how can I fix it? The big problem is that this may happen in production.

visualcode/rest/getbropr:1 Failed to load resource: the server responded with a status of 405 (Method Not Allowed)
main.js:1127 HttpErrorResponse
    error: "Error HTTP method POST is not supported by this URL"
    headers: HttpHeaders {normalizedNames: Map(0), lazyUpdate: null, lazyInit: ƒ}
    message: "Http failure response for http://localhost:4400/visualcode/rest/getbropr: 405 Method Not Allowed"
    name: "HttpErrorResponse"
    ok: false
    status: 405
    statusText: "Method Not Allowed"
    url: "http://localhost:4400/visualcode/rest/getbropr"

Server Bug Fix: wildfly and logrotate: wildfly still logs messages to already rotated server.log

For WildFly (on Linux) I need the following logging scenario: daily rotation of server.log and removal of log files older than 90 days. I don’t see a way to configure this in WildFly/log4j (the problem is removing the old log files, but I would be happy for tips on doing this directly with the WildFly configuration). So I have to use Linux logrotate for this. I have the following logrotate configuration file:

/var/log/wildfly/capp/*.log {
    missingok
    daily
    notifempty
    rotate 90
    maxage 90
    dateext
    dateformat -%Y%m%d
}

The server.log is rotated successfully in the early morning. But WildFly keeps writing log messages into the already rotated file (see the timestamps of the last write access):

#> ls -la
-rw-r--r-- 1 wildfly-capp wildfly-capp      0  4. Apr 03:40 server.log
-rw-r--r-- 1 wildfly-capp wildfly-capp 368909  4. Apr 07:00 server.log-20190404

Is there a way to force WildFly to use the server.log file instead of the already rotated file (without restarting WildFly)? Or is it possible to change the WildFly logging configuration to remove log files older than x days?

The wildfly logging configuration is:

<subsystem xmlns="urn:jboss:domain:logging:5.0">
        <console-handler name="CONSOLE">
            <level name="INFO"/>
            <formatter>
                <named-formatter name="COLOR-PATTERN"/>
            </formatter>
        </console-handler>
        <file-handler name="FILE" autoflush="true">
            <formatter>
                <named-formatter name="PATTERN"/>
            </formatter>
            <file relative-to="jboss.server.log.dir" path="server.log"/>
            <append value="true"/>
        </file-handler>
        <logger category="com.arjuna">
            <level name="WARN"/>
        </logger>
        <logger category="org.jboss.as.config">
            <level name="DEBUG"/>
        </logger>
        <logger category="sun.rmi">
            <level name="WARN"/>
        </logger>
        <root-logger>
            <level name="INFO"/>
            <handlers>
                <handler name="CONSOLE"/>
                <handler name="FILE"/>
            </handlers>
        </root-logger>
        <formatter name="PATTERN">
            <pattern-formatter pattern="%d{yyyy-MM-dd HH:mm:ss,SSS} %-5p [%c] (%t) %s%e%n"/>
        </formatter>
        <formatter name="COLOR-PATTERN">
            <pattern-formatter pattern="%K{level}%d{HH:mm:ss,SSS} %-5p [%c] (%t) %s%e%n"/>
        </formatter>
</subsystem>

After you rotate the logs, you need to tell WildFly that they have been rotated and that it should start writing to the new log file. Usually that is done by sending a HUP signal to the daemon, or you can just restart it. Otherwise, the daemon keeps the file handle of the open file and continues writing to the old file. This is done by adding a postrotate section that tells logrotate what to run after the logs are rotated. Take a look at examples of postrotate sections in logrotate config files. Here are some examples from my computer for ufw and samba:

postrotate
    invoke-rc.d rsyslog rotate >/dev/null 2>&1 || true
endscript


postrotate
    if [ -d /run/systemd/system ] && command systemctl >/dev/null 2>&1 && systemctl is-active --quiet samba-ad-dc; then
        systemctl kill --kill-who all --signal=SIGHUP samba-ad-dc
    elif [ -f /var/run/samba/samba.pid ]; then
        # This only sends to main pid, See #803924
        kill -HUP `cat /var/run/samba/samba.pid`
    fi
endscript

For wildfly you will have to write your own command(s) to make wildfly reopen log files.
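If sending a signal is not an option, logrotate’s copytruncate directive is a possible alternative: it copies the log file and then truncates the original in place, so the daemon keeps writing through the same file handle and never has to reopen anything (at the cost of possibly losing a few lines written during the copy). A sketch of the stanza from the question with that directive added:

/var/log/wildfly/capp/*.log {
    missingok
    daily
    notifempty
    copytruncate
    rotate 90
    maxage 90
    dateext
    dateformat -%Y%m%d
}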

The point is that I don’t want to restart WildFly. Thanks to “nobody” for the answer, but now I know logrotate is not the right thing for my case.

A workaround is using the “size-rotating-file-handler” (instead of “file-handler”) in WildFly. To switch to the size-rotating handler I have to edit standalone.xml and replace the <file-handler name="FILE" autoflush="true"> section of <subsystem xmlns="urn:jboss:domain:logging:5.0"> with:

<size-rotating-file-handler name="FILE" autoflush="true">
    <formatter>
        <named-formatter name="PATTERN"/>
    </formatter>
    <file relative-to="jboss.server.log.dir" path="server.log"/>
    <append value="true"/>
    <rotate-size value="1M"/>
    <max-backup-index value="10"/>
</size-rotating-file-handler>

This will rotate the server.log file once it grows beyond 1 MB and will keep only 10 backup files. All older log files will be removed.
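The same switch can presumably also be made through the CLI instead of hand-editing standalone.xml while the server is stopped; a sketch, assuming the handler and formatter names used above:

/subsystem=logging/root-logger=ROOT:remove-handler(name=FILE)
/subsystem=logging/file-handler=FILE:remove
/subsystem=logging/size-rotating-file-handler=FILE:add(file={relative-to=jboss.server.log.dir, path=server.log}, append=true, autoflush=true, named-formatter=PATTERN, rotate-size=1m, max-backup-index=10)
/subsystem=logging/root-logger=ROOT:add-handler(name=FILE)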

I think if you want a daily-rotating log file appender that also removes log files older than x days, you have to write your own DailyRotateFileAppender that can remove old log files. Then you have to integrate the new file appender into WildFly (WildFly must be able to find the class file, and standalone.xml must be changed so that WildFly uses the new appender). I think this should work. For me, however, the time required to do this is too large …

Can you combine logrotate and JBoss options?
For a standalone configuration, change logging.properties:

logger.handlers=FILE
handler.log_rotation.suffix=-yyyyMMdd

And only use logrotate for files with an - in the filename:

/var/log/wildfly/capp/*.log-* {

Code Bug Fix: Setting up an SSL certificate on Wildfly 19 using a Let’s Encrypt certificate

I am trying to set up an SSL certificate on WildFly 19.0.0.Final, running on CentOS (centos-release-7-7.1908.0.el7.centos.x86_64) with Java:

openjdk version "1.8.0_242"
OpenJDK Runtime Environment (build 1.8.0_242-b08)
OpenJDK 64-Bit Server VM (build 25.242-b08, mixed mode)

I have performed the following steps to map, say, the https://www.example.com domain to my WildFly content, payslip.

I have my keystore at the following location :
/opt/wildfly-19.0.0.Final/standalone/configuration/www.example.com.jks

Adding certificate to server.
http://www.mastertheboss.com/jboss-server/jboss-security/complete-tutorial-for-configuring-ssl-https-on-wildfly

Start the management CLI with sh /opt/wildfly-19.0.0.Final/bin/jboss-cli.sh and run connect.

Then run the following script

batch
# Configure Server Keystore
/subsystem=elytron/key-store=demoKeyStore:add(path=server.keystore,relative-to=jboss.server.config.dir, credential-reference={clear-text=secret},type=JKS)
# Server Keystore credentials  
/subsystem=elytron/key-manager=demoKeyManager:add(key-store=demoKeyStore,credential-reference={clear-text=secret})
# Server keystore Protocols  
/subsystem=elytron/server-ssl-context=demoSSLContext:add(key-manager=demoKeyManager,protocols=["TLSv1.2"]) 
# This is only needed if WildFly uses by default the Legacy security realm
/subsystem=undertow/server=default-server/https-listener=https:undefine-attribute(name=security-realm)
# Store SSL Context information in undertow
/subsystem=undertow/server=default-server/https-listener=https:write-attribute(name=ssl-context,value=demoSSLContext)

run-batch

reload

This adds a tls section to the configuration file, which will look like:

<tls>
    <key-stores>
        <key-store name="demoKeyStore">
        <credential-reference clear-text="secret"/>
        <implementation type="JKS"/>
        <file path="server.keystore" relative-to="jboss.server.config.dir"/>
        </key-store>
    </key-stores>
    <key-managers>
        <key-manager name="demoKeyManager" key-store="demoKeyStore">
        <credential-reference clear-text="secret"/>
        </key-manager>
    </key-managers>
    <server-ssl-contexts>
        <server-ssl-context name="demoSSLContext" protocols="TLSv1.2" key-manager="demoKeyManager"/>
    </server-ssl-contexts>
</tls>

Stop WildFly to start making changes to the config:

/usr/sbin/wildfly-19.0.0.Final stop
Stopping wildfly:

Then change the tls section to:

<tls>
    <key-stores>
        <key-store name="demoKeyStore">
            <credential-reference clear-text="Some1pwD"/>
            <implementation type="JKS"/>
            <file path="www.example.com.jks" relative-to="jboss.server.config.dir"/>
        </key-store>
    </key-stores>
    <key-managers>
        <key-manager name="demoKeyManager" key-store="demoKeyStore">
            <credential-reference clear-text="Some1pwD"/>
        </key-manager>
    </key-managers>
    <server-ssl-contexts>
        <server-ssl-context name="demoSSLContext" protocols="TLSv1.2" key-manager="demoKeyManager"/>
    </server-ssl-contexts>
</tls>

/usr/sbin/wildfly-19.0.0.Final start

I am unable to access WildFly at https://www.example.com, while http://www.example.com is working.

It’s possible to obtain certificates from Let’s Encrypt using the WildFly CLI. Take a look at the following blog post that describes how to do this:

https://developer.jboss.org/people/fjuma/blog/2018/08/31/obtaining-certificates-from-lets-encrypt-using-the-wildfly-cli

There’s also additional documentation in Section 4.3.6 here:

https://docs.wildfly.org/19/WildFly_Elytron_Security.html#configure-ssltls

Note that to make use of a new certificate without needing to restart the server, you just need to re-initialize your key-manager (e.g., /subsystem=elytron/key-manager=httpsKM:init()).
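As a rough orientation (the exact operations and parameters are described in the blog post and documentation linked above, so treat the resource and parameter names below as a sketch), the Let’s Encrypt flow in the CLI revolves around a certificate-authority-account resource and the key-store’s obtain-certificate operation, followed by re-initializing the key-manager:

# account key store and Let's Encrypt account (names and contact address are placeholders)
/subsystem=elytron/key-store=accountKS:add(path=account.keystore, relative-to=jboss.server.config.dir, credential-reference={clear-text=secret}, type=JKS)
/subsystem=elytron/certificate-authority-account=myLetsEncryptAccount:add(key-store=accountKS, alias=account, contact-urls=[mailto:[email protected]])

# request a certificate for the domain into the HTTPS key store used by the server-ssl-context
/subsystem=elytron/key-store=demoKeyStore:obtain-certificate(alias=www.example.com, domain-names=[www.example.com], certificate-authority-account=myLetsEncryptAccount, agree-to-terms-of-service=true)

# pick up the new certificate without restarting
/subsystem=elytron/key-manager=demoKeyManager:init()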

Code Bug Fix: Accessing EJBs from pojo class==>org.jboss.remote-naming

I have deployed Test.jar on the WildFly server at the path wildfly-10.1.0.Final/standalone/deployments.
Test.jar contains the Java file Test.java, whose ‘main’ method accesses an EJB class named ‘TestBean.java’.
MANIFEST.MF (in the META-INF directory) is present in Test.jar:

Manifest-Version: 1.0
Created-By: 1.8.0_66 (Oracle Corporation)
Main-Class: Test

‘jboss-deployment-structure.xml’ is also present in the META-INF directory:

<?xml version="1.0"?>
<jboss-deployment-structure xmlns="urn:jboss:deployment-structure:1.1">
    <deployment>
        <dependencies>
            <module name="org.jboss.remote-naming" export="true" />
        </dependencies>
    </deployment>
</jboss-deployment-structure>

but I am still getting an error when running the Test class.

javax.naming.NoInitialContextException: Cannot instantiate class: org.jboss.naming.remote.client.InitialContextFactory [Root exception is java.lang.ClassNotFoundException: org.jboss.naming.remote.client.InitialContextFactory]
at javax.naming.spi.NamingManager.getInitialContext(NamingManager.java:674)
at javax.naming.InitialContext.getDefaultInitCtx(InitialContext.java:313)
at javax.naming.InitialContext.init(InitialContext.java:244)
at javax.naming.InitialContext.<init>(InitialContext.java:216)
at getSessionInterfaceObject(Test.java:21)
Caused by: java.lang.ClassNotFoundException: org.jboss.naming.remote.client.InitialContextFactory
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:348)
at com.sun.naming.internal.VersionHelper12.loadClass(VersionHelper12.java:72)
at com.sun.naming.internal.VersionHelper12.loadClass(VersionHelper12.java:61)
at javax.naming.spi.NamingManager.getInitialContext(NamingManager.java:672)
... 8 more

I am running Test.jar using the command ‘java -jar Test.jar’.

Test.java contains the following code for creating the context object:

public static Object getSessionInterfaceObject(String ipAddress, String jndiLookupName) throws NamingException {
    Object interfaceObject = null;
    Context ejbRootNamingContext = null;
    Context context = null;
    try {
        String EJBUSER = "EjbUser";
        String EJBPASS = "ejbuser123";
        String PORT = "8080";
        Hashtable jndiProps = new Hashtable();
        jndiProps.put("java.naming.factory.initial", "org.jboss.naming.remote.client.InitialContextFactory");
        jndiProps.put("java.naming.provider.url", "http-remoting://" + ipAddress + ":" + PORT);
        jndiProps.put("jboss.naming.client.ejb.context", Boolean.valueOf(false));
        jndiProps.put("org.jboss.ejb.client.scoped.context", Boolean.valueOf(true));
        jndiProps.put("java.naming.factory.url.pkgs", "org.jboss.ejb.client.naming");
        jndiProps.put("endpoint.name", "client-endpoint");
        jndiProps.put("remote.connectionprovider.create.options.org.xnio.Options.SSL_ENABLED", Boolean.valueOf(false));
        jndiProps.put("remote.connections", "default");
        jndiProps.put("remote.connection.default.connect.options.org.xnio.Options.SASL_POLICY_NOANONYMOUS", Boolean.valueOf(false));
        jndiProps.put("remote.connection.default.host", ipAddress);
        jndiProps.put("remote.connection.default.port", PORT);
        jndiProps.put("remote.connection.default.username", EJBUSER);
        jndiProps.put("remote.connection.default.password", EJBPASS);
        context = new InitialContext(jndiProps);
        ejbRootNamingContext = (Context) context.lookup("ejb:");
        interfaceObject = ejbRootNamingContext.lookup(jndiLookupName);
    } catch (Exception ex) {
        ex.printStackTrace();
    }
    return interfaceObject;
}

Could someone help me understand why am I getting that error?

Code Bug Fix: Java Spring – Disable any logging sent to server.log in Wildfly 11

I have a Java Spring REST web service. This web service interacts with a database, and I had configured logging through the WildFly console to send error messages and everything about this WAR into a myapp.log file.

I get this kind of error sent to the myapp.log file (which is expected):

22:28:09.027 [default task-1] [org.myapp.testapp] ERROR p.c.p.h.controller.TestController- ERROR: duplicate key value violates unique constraint "my_constraint"

In my Java code I am using:

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

...

 static final Logger logger = LoggerFactory.getLogger(TestController.class);

and logging the error after an exception. As I said, I get this kind of error message in my myapp.log file, but I also get the equivalent error in server.log:

 22:52:08,507 WARN  [org.hibernate.engine.jdbc.spi.SqlExceptionHelper] (default task-1) SQL Error: 0, SQLState: 23505
 22:52:08,508 ERROR [org.hibernate.engine.jdbc.spi.SqlExceptionHelper] (default task-1) ERROR: duplicate key value violates unique constraint "my_constraint"
Detail: Key (myvalue)=(Avalue) already exists.

How can I configure my Java project to stop those messages from going to server.log? I want myapp.log to be the only log file for my application.

You should define a logger for that category (org.hibernate) in your log4j.properties file and suppress those messages or forward them to the corresponding appender.
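A minimal sketch of what that could look like with log4j 1.x properties; the appender name, file path and level here are assumptions, and whether this actually keeps the messages out of server.log also depends on how the deployment’s logging is wired into the WildFly logging subsystem:

# send Hibernate messages only to the application appender, not up to the root logger
log4j.logger.org.hibernate=ERROR, MYAPP
log4j.additivity.org.hibernate=false

log4j.appender.MYAPP=org.apache.log4j.FileAppender
log4j.appender.MYAPP.File=myapp.log
log4j.appender.MYAPP.layout=org.apache.log4j.PatternLayout
log4j.appender.MYAPP.layout.ConversionPattern=%d %p [%c] %m%n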
