Technical Interview Questions and Answers :: webMethods
The recommendations made here are generalized and suitable for most uses. It is still important to verify, for each environment, that the chosen settings are appropriate by adequately testing the performance and behavior profile.
If you are not using trigger retries, set the retry count to 0. This will noticeably improve performance, especially as documents get larger and more complex.
Trigger Processing Mode
Serial processing mode is used to enforce document order on consumption. In a single-instance environment, the order of processing is the order in the queue. In a clustered environment, the order of processing is based on publisher order: an instance acquires ownership of documents from one source and then exclusively processes them in a single-threaded fashion, in the order they appear in the queue. Other sources may be processed by other IS instances in the cluster. For most general purposes, the processing mode should be set to concurrent, which gives far better performance.
Trigger Processing Mode = Concurrent, assuming order of processing is not important
The number of threads should generally be no more than a small multiple of the number of CPU cores available to the IS, also considering that all service threads within the Integration Server must share CPU resources. The number of threads may be increased further where the work done in the service has a relatively low CPU content, for example where there is a lot of IO involved, or where the service thread is waiting for external applications or resources. Setting trigger threads too high will start to incur context-switching overheads at the OS level and within the JVM.
Trigger Threads = 4 x CPU, except where order of processing is important and Serial processing mode is used
Consider the amount of work each thread must do, not just for one trigger but for all trigger consumers. If the trigger service is very short and lightweight, it can support more threads than a more computationally expensive service. Document size plays a part, but it is only one reason that threads become computationally expensive. Review all the triggers in the context of the whole system, not just a single trigger.
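The rule of thumb above can be sketched as a small sizing helper. This is an illustrative assumption, not an official webMethods formula: the multipliers (4x for CPU-bound, a higher value for IO-bound work) and the function name are hypothetical starting points to be validated by testing.

```python
def suggest_trigger_threads(cpu_cores, io_bound=False, serial=False):
    """Return a starting-point trigger thread count (rule of thumb, not official).

    cpu_cores: CPU cores available to the Integration Server.
    io_bound:  True when the trigger service mostly waits on IO or
               external applications, allowing more threads per core.
    serial:    Serial processing mode is effectively single threaded.
    """
    if serial:
        return 1
    # Illustrative multipliers: ~4x cores for CPU-heavy services,
    # more when threads spend most of their time waiting.
    multiplier = 8 if io_bound else 4
    return cpu_cores * multiplier

print(suggest_trigger_threads(4))                # CPU-bound service, 4 cores -> 16
print(suggest_trigger_threads(4, io_bound=True)) # IO-heavy service -> 32
```

Remember that the total across all triggers, not just one, shares the CPU; setting this too high incurs context-switching overhead at the OS and JVM level.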
Trigger Cache Size and Refill Level
The trigger cache size defines the number of documents that may be held in memory while documents are unacknowledged on the Broker. The cache is filled with documents (in batches of up to 160 at a time) from the Broker, so a larger cache size reduces the number of read activities performed on the Broker. The IS goes back to the Broker for more documents when the number of documents left in the cache falls below the refill level. The objective in setting these parameters is to ensure that whenever a trigger thread becomes available for use, there is a document already in the cache. The cache size should be as small as it can be whilst still being effective, to minimize the use of memory in the IS (note the size is specified in documents, not total bytes held). If the processing of documents is generally very short, the cache should be larger. As a rough guide, the cache size may be 5 to 10 times the number of trigger threads, and the refill level 30%-40% of that value (or, equivalently, twice the number of trigger threads).
Trigger Cache Size = 5 x Trigger Threads
Trigger Refill Level = 2 x Trigger Threads
Trigger Cache Memory Usage = Trigger Cache Size x Average Document Size
For small documents with lightweight services these settings could be too conservative, and for large documents they could be too aggressive.
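The three formulas above can be combined into one sketch for estimating a starting configuration and its memory cost. The function name and the fixed 5x/2x multipliers are assumptions taken directly from the rough guide; real values should be tuned per environment.

```python
def suggest_cache_settings(trigger_threads, avg_doc_bytes):
    """Rough starting values for trigger cache tuning (per the guide above).

    Returns (cache_size, refill_level, estimated_memory_bytes).
    """
    cache_size = 5 * trigger_threads          # guide suggests 5-10x threads
    refill_level = 2 * trigger_threads        # ~30-40% of cache size
    est_memory = cache_size * avg_doc_bytes   # sizes are in documents, not bytes
    return cache_size, refill_level, est_memory

# Example: 4 trigger threads, ~10 KB average document.
cache, refill, mem = suggest_cache_settings(4, 10_000)
print(cache, refill, mem)  # 20 8 200000
```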
Acknowledgement Queue Size
The AckQ is used to collect acknowledgements for documents processed by the trigger threads when they complete. If set to a size of one, then the trigger thread waits for the acknowledgement to be received by the Broker before it completes. If the AckQ size is greater than one, then the trigger thread places the acknowledgement in the AckQ and exits immediately. A separate acknowledging thread polls the AckQ periodically to write acknowledgements to the Broker. If the AckQ reaches capacity, it is immediately written out to the Broker, with any trigger threads waiting to complete while this operation is done. Setting the AckQ size greater than one enables the queue and reduces the wait time in the trigger threads. If performance is important, the AckQ should be set to a size of one to two times the number of trigger threads. Acknowledgements only affect guaranteed document types. Volatile documents are acknowledged automatically upon reading them from the Broker into the trigger cache.
Acknowledgement Queue Size = 2 x Trigger Threads
The potential caveat to this setting is the number of documents that might need to be reprocessed in the event of a server crash.
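As a companion to the formula above, the following hypothetical helper makes the trade-off explicit: a size of one disables batching (each trigger thread waits on its own acknowledgement), while anything larger enables the queue. Names and structure are illustrative only.

```python
def suggest_ackq_size(trigger_threads, batch_acks=True):
    """Suggested AckQ size per the guide above (assumption, not official).

    batch_acks=False models the conservative setting: size 1, where each
    trigger thread waits for the Broker to receive its acknowledgement,
    minimizing possible reprocessing after a crash at some performance cost.
    """
    return 2 * trigger_threads if batch_acks else 1

print(suggest_ackq_size(4))                    # performance-oriented -> 8
print(suggest_ackq_size(4, batch_acks=False))  # crash-conservative -> 1
```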
Volatile documents are handled entirely in memory and so the quality of storage is propagated into the handling in the IS as well. Loss of memory results in loss of a volatile document whether it is held by the Broker or by the IS. This is also why acknowledgements are returned to the Broker upon reading a volatile document.
For guaranteed messages, in-memory storage about the state of a message can exist in both the Trigger Cache and in the Acknowledgement Queue. If the IS terminates abnormally, then this state is lost. However, for unacknowledged, guaranteed documents, the redelivery flag will always be set on the Broker as soon as the document is accessed by the IS. Therefore after an abrupt IS termination or disconnection, the unacknowledged documents will be presented either to the same IS upon restart, or once the Broker determines that the IS has lost its session, to another IS in the same cluster.
All these documents will have the redelivery flag set and may be managed using the duplicate detection features, described in the Pub/Sub User Guide.
In such a failure scenario, the number of possible unacknowledged messages will be a worst case of Trigger Cache Size plus Acknowledgement Queue Size. The number of documents that had completed processing but were not acknowledged will be a worst case of Trigger Threads plus Acknowledgement Queue Size. The number of documents that were part way through processing but hadn't completed will be a worst case of Trigger Threads. The number of documents that will have the redelivery flag set but had actually undergone no processing at all will be a worst case of Trigger Cache Size.
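The worst-case arithmetic above can be captured in one small function. It simply restates the four bounds from the paragraph; the dictionary keys are made-up labels for illustration.

```python
def worst_case_counts(cache_size, ackq_size, trigger_threads):
    """Worst-case document counts after an abrupt IS termination,
    per the failure analysis above."""
    return {
        # Possibly unacknowledged on the Broker (will be redelivered):
        "unacknowledged_total": cache_size + ackq_size,
        # Fully processed but acknowledgement never reached the Broker:
        "processed_not_acked": trigger_threads + ackq_size,
        # Part way through processing when the IS died:
        "in_flight": trigger_threads,
        # Redelivery flag set but never actually processed:
        "redelivered_unprocessed": cache_size,
    }

# Example: cache 20, AckQ 8, 4 trigger threads.
print(worst_case_counts(20, 8, 4))
```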
If the trigger is subscribing to multiple document types (has multiple subscription conditions defined), then the trigger threads are shared by all document types. This may give rise to variations in the processing required for each message and the size of each message in the cache. Where this complicates the situation, it is better to use one condition per trigger.
If document joins are being used, refer to the user guide for information about setting join timeouts. A trigger thread is only consumed when the join is completed and the document(s) are passed to the service for processing.
Block timeout: the amount of time the IS waits to get a connection from the connection pool before throwing an exception.
Expire timeout: the amount of time a free (idle) connection remains in the pool before it expires.
JDBC Adapter connection pools (or "connection pooling") refer to the capability to open and use several connections to a given database instance concurrently, for the sake of performance.
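The block-timeout behaviour described above can be sketched with a plain queue standing in for a connection pool. This is not the JDBC Adapter's implementation; all class and method names here are invented for illustration.

```python
import queue

class PoolExhaustedError(Exception):
    """Raised when no connection becomes free within the block timeout."""

class SimplePool:
    """Toy connection pool modelling the block-timeout semantics above."""

    def __init__(self, size, block_timeout_s):
        self._free = queue.Queue()
        for i in range(size):
            self._free.put(f"conn-{i}")  # placeholder connection objects
        self._block_timeout_s = block_timeout_s

    def acquire(self):
        try:
            # Wait up to the block timeout for a free connection, then fail.
            return self._free.get(timeout=self._block_timeout_s)
        except queue.Empty:
            raise PoolExhaustedError("no connection within block timeout")

    def release(self, conn):
        self._free.put(conn)
```

An expire timeout would additionally evict connections that sit idle in `_free` for too long; that is omitted here for brevity.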
When the notification is disabled, the database trigger and buffer table are dropped, and any pending documents are discarded.
An Insert adapter notification retrieves inserted data from the buffer table and publishes it for a trigger to consume. Like other notifications, it is polling based. The key point is that the database trigger and buffer table are created automatically by the IS when the notification is created, and dropped when it is disabled (not suspended).
With a Basic notification, you create the database trigger and buffer table yourself, so they are not created or dropped automatically when the notification is disabled.
A canonical document is a standardized representation that a document may assume while it is passing through the webMethods integration platform. A canonical document acts as the intermediary data format between resources.
The dispatcher checks whether the Broker is up. If it is not, the dispatcher checks the document's storage type: if the storage type is guaranteed, the document is stored in the outbound document store; if the document type is volatile, the document is discarded and an exception is thrown.
Deployer is used for the deployment of packages, services, and subservices from one environment to another (for example, from a Test environment to a Production environment), taking care of all dependencies and dependent services. Four steps are involved in deploying a package: Create, Build, Map, Deploy.