As computer users, we all look for answers on the internet from time to time. It can be quite frustrating to find someone posting a query about the exact problem we are having, only to never post a follow-up answer. My intention is to make available, via this blog, my answers to questions and issues that I have encountered and been able to resolve, in the hope of helping someone else.
Tuesday, July 21, 2020
SAP Crystal Reports trial key for older releases
While SAP doesn't come out and say it, it is possible to obtain a trial key for older releases where needed. Register an account with them to obtain a Crystal Reports 2016 keycode, which will also work with an older release. In my case, I deployed 2011 SP12, which is end of life and end of support; this particular version was required to maintain compatibility with a software package that a client is using.
In summary, as of the time of this writing, a new trial key will work with older versions, at least as far back as 2011 SP12.
SSRS native does not bind TCP ports configured
If you find that an installation of Microsoft SQL Server Reporting Services (SSRS) is not binding to its configured TCP ports, the best place to look is the log files.
For a default installation, these are located at C:\Program Files\Microsoft SQL Server Reporting Services\SSRS\LogFiles.
The files of interest are named RSHostingService_*, and from them you'll likely be able to isolate the cause of the problem.
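If you have grep available (e.g. via Git Bash or WSL), filtering those logs for errors narrows things down quickly. The sketch below stands a fabricated sample file in for a real RSHostingService_* log, since I can't reproduce the actual messages here; the file name and message text are made up for illustration only:

```shell
# Fabricated sample log standing in for a real RSHostingService_*.log file.
mkdir -p /tmp/ssrslogs
cat > /tmp/ssrslogs/RSHostingService_sample.log <<'EOF'
Info: Service starting
Error: port binding failed
EOF
# Surface only the error lines.
grep -i "error" /tmp/ssrslogs/RSHostingService_*.log
```

On a real server, point the glob at the LogFiles directory above instead of the temporary path.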
In my case, the cause was installing SSRS alongside a copy of SQL Server purchased through volume licensing, where product keys are not provided through the web site. The product key for SQL Server purchased and downloaded in this manner is actually available on the ISO image, in the root folder, in a file named Default.ini.
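As a hedged sketch of pulling the key out of that file: the setting name PID and the key value below are assumptions (check the actual contents of the file on your own ISO), and the mounted ISO is simulated here with a sample file:

```shell
# Simulate the ISO's root folder; on a real system, read Default.ini
# from the mounted ISO instead. "PID" and the value are assumptions.
mkdir -p /tmp/sqliso
cat > /tmp/sqliso/Default.ini <<'EOF'
[Options]
PID=AAAAA-BBBBB-CCCCC-DDDDD-EEEEE
EOF
# Print whatever follows "PID=" on its line.
sed -n 's/^PID=//p' /tmp/sqliso/Default.ini
```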
Use this product key to register SSRS. Note that if you've previously installed SSRS in trial mode, all of the enterprise features are enabled. If you then apply a standard server license, SSRS will complain loudly about various things; in my case it reported that Scale-Out is not supported.
I hope that this helps someone else who forgot to license their SSRS installation and cannot figure out why it doesn't work after the evaluation period expires.
Saturday, January 14, 2012
Castleville assets not fully loading
I've been playing Zynga's Castleville on facebook lately and have recently lost certain game assets. The game loads fine when I'm taking a break at work. At home, however, it doesn't appear to load all of the game assets (e.g. Beastie and farm).
After verifying all of the typical things (latest version of the Adobe Flash Player, clearing my cache, etc.), I was still experiencing the same problem. This had been going on for almost a week, and I suspected it had something to do with my internet connection being shaped, as the problem began near enough to that happening.
Wanting to have a play today, I began thinking about differences between my work Firefox (v9) and my home version. It occurred to me that apart from the number of tabs and their content, the TorButton extension was the only difference. Into the Add-ons dialog I go, remove the extension (as I rarely use it) and restart Firefox. I log back into facebook and hit Castleville; wait for the game to finish loading and presto, everything is now as it should be.
Monday, June 27, 2011
VMware Server reconfiguration required on boot
Under a Debian 5.0 installation, VMware Server 2.0.2-203138 is currently installed.
The issue being experienced is that every time the server is restarted, vmware-config.pl needs to be run so that the service may successfully start.
This appears to be a fairly common problem; however, there did not appear to be a magic-bullet solution that actually corrected the issue.
Further investigation suggested that during installation, VMware creates the device nodes in /dev and expects them to be present on subsequent system starts. The problem with this is that with udev, device nodes are created dynamically, so the VMware nodes are lost. While the /etc/init.d/vmware script suggests that the nodes are created, in my experience they aren't.
One solution is to force the creation of the required nodes within the service_vmware start function of /etc/init.d/vmware, as such:
service_vmware() {
  # See how we were called.
  case "$1" in
    start)
      if vmware_inVM; then
        # Refuse to start services in a VM: they are useless
        exit 1
      fi
      # UDEV fix follows
      if [ ! -e "/dev/vmmon" ]; then
        mknod /dev/vmmon c 10 165 2>/dev/null
        chmod 600 /dev/vmmon
      fi
      if [ ! -e "/dev/vmci" ]; then
        mknod /dev/vmci c 10 60 2>/dev/null
        chmod 666 /dev/vmci
      fi
      for a in `seq 0 9`; do
        if [ ! -e "/dev/vmnet$a" ]; then
          mknod /dev/vmnet$a c 119 $a 2>/dev/null
          chmod 600 /dev/vmnet$a
        fi
      done
That solved the device nodes problem for me.
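A quick way to verify the fix after a reboot is to confirm the device nodes actually exist. This is only a sketch checking a few of the nodes, and it prints one line per node either way so that missing ones stand out:

```shell
# Sanity check after boot: report presence of the VMware device nodes.
for dev in /dev/vmmon /dev/vmci /dev/vmnet0; do
  if [ -e "$dev" ]; then
    echo "$dev: present"
  else
    echo "$dev: missing"
  fi
done
```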
However, another issue was still present that required vmware-config.pl to be run on server restarts: /etc/vmware/not_configured was now present.
During the start-up process for VMware, an exitcode variable is built up from the running and/or starting of the various programs required by VMware to operate.
If this variable is anything other than zero and the VMware product is not VMware Workstation, the not_configured file is generated. With this file present on the next attempted start of the service, the script exits from the check_configured function, complete with an advisory note that VMware has not been configured and that vmware-config.pl needs to be run to resolve the problem.
Running vmware-config.pl does in fact solve the problem as expected; however, as previously mentioned, the fix does not stick.
The additional requirement for me to overcome this problem was to comment out the exitcode check performed by the script.
Just below the section added above within the service_vmware start function, you'll need to comment out this check as shown below:
#      if [ "$exitcode" -gt 0 -a `vmware_product` != "ws" ]; then
#        # Set the 'not configured' flag
#        touch "$vmware_etc_dir"'/not_configured'
#        chmod 644 "$vmware_etc_dir"'/not_configured'
#        db_add_file "$vmware_db" "$vmware_etc_dir"'/not_configured' \
#                    "$vmware_etc_dir"'/not_configured'
#        exit 1
#      fi
While this is not the most ideal solution for some, it solved the problem for me and I hope that some of this information may help others.
Saturday, April 16, 2011
Gaining access to underlying log levels that are not exposed in log4net
Within log4net, the ILog interface exposes five general log levels that are typically used within an application:
- Debug
- Error
- Fatal
- Info
- Warn
There are a few ways of achieving this goal; what follows is what I believe to be the simplest method possible using the technology available to us at this point in time. The alternative methods I will leave up to you, the reader, to research for yourself, as they are not the target of this article, nor is it my intention to compare them with the approach chosen here.
We are going to use extension methods to achieve this goal.
using System;
using System.Globalization;
using log4net;
using log4net.Core;
using log4net.Util;
/// <summary>
/// Extension methods for <see cref="ILog"/> implementations that provide the callers access to the <see cref="Level.Finest"/> logging level.
/// </summary>
public static class LogExtensions
{
private static readonly Type DeclaringType = typeof(LogExtensions);
/// <summary>
/// Log a message object with the <see cref="Level.Finest"/> level
/// </summary>
/// <param name="log">The ILog instance used by the application to log messages into the log4net framework.</param>
/// <param name="message">The message object to log.</param>
public static void Finest( this ILog log, object message )
{
// NullLog returns null for 'Logger' property.
if ( log.Logger == null )
return;
log.Logger.Log( DeclaringType, Level.Finest, message, null );
}
/// <summary>
/// Log a message object with the <see cref="Level.Finest"/> level
/// </summary>
/// <param name="log">The ILog instance used by the application to log messages into the log4net framework.</param>
/// <param name="message">The message object to log.</param>
/// <param name="exception">The exception to log, including its stack trace.</param>
public static void Finest( this ILog log, object message, Exception exception )
{
// NullLog returns null for 'Logger' property.
if ( log.Logger == null )
return;
log.Logger.Log( DeclaringType, Level.Finest, message, exception );
}
/// <summary>
/// Logs a formatted message string with the <see cref="Level.Finest"/> level.
/// </summary>
/// <param name="log">The ILog instance used by the application to log messages into the log4net framework.</param>
/// <param name="format">A String containing zero or more format items</param>
/// <param name="args">An Object array containing zero or more objects to format</param>
public static void FinestFormat( this ILog log, string format, params object [] args )
{
// NullLog returns null for 'Logger' property.
if ( log.Logger == null )
return;
if ( log.IsFinestEnabled() )
log.Logger.Log( DeclaringType, Level.Finest,
new SystemStringFormat( CultureInfo.InvariantCulture, format, args ), null );
}
/// <summary>
/// Checks if this logger is enabled for the <c>Finest</c> level.
/// </summary>
/// <param name="log">The logger instance to check.</param>
/// <returns>
/// <c>true</c> if this logger is enabled for <c>Finest</c> events, <c>false</c> otherwise.
/// </returns>
public static bool IsFinestEnabled( this ILog log )
{
// NullLog returns null for 'Logger' property.
if ( log.Logger == null )
return false;
return log.Logger.IsEnabledFor( Level.Finest );
}
}
As can be seen, the extension method class is fairly straightforward and easy to understand.
What may not apply for you, the reader, is the null check within each of the extension methods (see comments about NullLog).
The null checks are there because a NullLog ILog instance is used within unit tests, where the FINEST level of logging is not utilized within any assertions. If you have no use for such checks, feel free to remove them.
I hope that this snippet of code makes someone's life easier. Happy hacking!
Monday, February 7, 2011
Samsung Kies fails to connect to device
I recently updated Samsung Kies to version 2.0.0.11014_49 and the software was no longer able to connect to my Samsung Galaxy S running Android 2.2.
The error message displayed was that CEUTIL.DLL could not be found.
The solution was to locate the CEUTIL.DLL and RAPI.DLL files:
C:\Program Files (x86)\Common Files\Samsung\DeviceService
They are called _ceutil.dll and _rapi.dll.
Copy them to your Samsung Kies directory:
C:\Program Files (x86)\Samsung\Kies
Now rename them to remove the underscore from the beginning of the file name.
Restart Samsung Kies and you should now be able to connect your device.
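The copy-and-rename step amounts to stripping the leading underscore from each file name. Sketched below in POSIX shell using temporary directories as stand-ins for the Windows paths above; on an actual machine you would copy between the real DeviceService and Kies directories instead:

```shell
# Simulated with temp dirs; substitute the real Windows directories.
src=$(mktemp -d)   # stands in for the Samsung DeviceService folder
dst=$(mktemp -d)   # stands in for the Samsung Kies folder
touch "$src/_ceutil.dll" "$src/_rapi.dll"
for f in _ceutil.dll _rapi.dll; do
  cp "$src/$f" "$dst/${f#_}"   # copy, dropping the leading underscore
done
ls "$dst"
```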
Sunday, January 30, 2011
Google Android Camera Preview Data
Recently I was working on a project at work that involved augmented reality that specifically targeted the mining industry.
One of the requirements for this project was for portable devices to be used for the augmented reality.
We had already developed a proof of concept application for the Apple iPhone running iOS. This worked fine for the most part and there were few hiccups in the process.
We also wanted to target Google Android operating system devices as well. To do this, we needed to experiment with that platform.
Initially we chose to stick with something that we knew fairly well for platform development and that was the world of .NET which most of our products, Winforms and Web, leveraged.
Another developer had tried to get MonoDroid to utilize the picture preview handler to obtain data from the device, which in this instance was a mobile phone with two cameras, front and back. He'd run into problems before going on holidays.
With work being somewhat slow this time of year for us, I was tasked with experimenting with the native Java API for Android to see what I could get cooking. If all worked well, once the other developer returned from his holidays, I could just hand over the native implementation and let him sort out the MonoDroid side of things in .NET land.
I've had my personal Samsung Galaxy S for almost six months now and have also had Eclipse with the Android SDK installed on my development machine at home. I had yet to do anything native with the device at this point. This was going to be a learning experience that I was quite looking forward to; getting paid to do something that I love doing, writing code and learning!
I jumped into Eclipse and updated the Android SDK environment to begin the process. Having never written a line of Java in the past 25 years or so of software development, I was quite looking forward to what I was starting.
Google was my first port of call for a simple "hello world" Android native Java application. I'd found a couple I was happy with and began to understand the flow of things; I'm a quick study of new languages.
Now that a basic application was building and deploying to the Android simulator environment that I had set up, the next logical step for me was to get the application to display live preview images from the rear facing camera within the same Android simulator environment.
A bit of research suggests that this methodology is in fact supported. So I dive into the API documentation on the Android site and begin to cobble together something resembling what I think will work. No surprise here, ran into a couple issues with a new language construct and have to rewire my thinking process. A few iterations and deployments to the Android simulator environment later and I finally have live simulated video from the camera being displayed within the application.
Now... onto the more difficult task of grabbing that video and saving out images as they're taken in near real time (as near as is feasible, which I'll cover later).
I hook into the preview call back for the camera. This gets me the camera instance and the raw byte array of data that is fed from the camera. Excellent, this may be easier than I think!
Wrong! Android has a weird way of working in that you can't just set something without knowing what the capabilities of the something you are setting are. The camera is one of these somethings.
I believed I had that sorted out, only to find that image formats across vendor implementations and deployments of the Android operating system were not consistent. *sigh* I had recently read as much in an article from a software developer that produced games. Fine: I'll find the image format, which ended up being YUV, and write my own decoder.
Well, it seems this has been done before and I had just reinvented the wheel. Go me! I looked into the specifics of the Android version on our development devices as well as my own personal phone. Jackpot! They've got 2.2update1 or better! This is a wonderful thing, because these versions support YUV via the YuvImage class, which can be converted to another format that I am interested in: JPEG.
A bit more hacking on the code and I finally have the application now grabbing data from the preview call back for the camera and successfully saving out that data to what should be JPEG files at a hard coded location within the file system.
A quick reconfiguration of the physical test device to gain access to the internal storage so that the images can be examined and previewed from the development machine results in a positive and expected outcome. I now have JPEG images that I can do something with.
With my task now complete, I rejoice by telling others in the office of how things went and how pleased I was with the results and the learning experience of a new language and platform that I hopefully will continue on with as time goes on.
The target of the experiment wasn't so much the saving of files, it was a convenient way of verifying results. When the developer returned from holidays, he was able to take the image byte array from the device and stream that from the device to a connected client browser via HTTP.
Mission accomplished.
The purpose of this article was to share an experience, along with some code that others may benefit from, so that they can learn from my experiences as I learned from those who inspired this application. I hope that I have achieved that purpose.
The code for the application follows:
import java.io.ByteArrayOutputStream;
import java.io.File;
import java.io.FileNotFoundException;
import java.io.FileOutputStream;
import java.io.IOException;
import android.app.Activity;
import android.content.res.Configuration;
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.graphics.ImageFormat;
import android.graphics.Rect;
import android.graphics.YuvImage;
import android.hardware.Camera;
import android.hardware.Camera.PreviewCallback;
import android.os.Bundle;
import android.util.Log;
import android.view.SurfaceHolder;
import android.view.SurfaceView;
public class ImageTest extends Activity
{
private SurfaceView surfaceView = null;
private SurfaceHolder surfaceHolder = null;
private Camera camera;
private int imageCount = 0;
/** Called when the activity is first created. */
@Override
public void onCreate(Bundle savedInstanceState)
{
super.onCreate( savedInstanceState );
setContentView( R.layout.main );
surfaceView = (SurfaceView) findViewById( R.id.SurfaceView01 );
surfaceHolder = surfaceView.getHolder();
surfaceHolder.setType( SurfaceHolder.SURFACE_TYPE_PUSH_BUFFERS );
surfaceHolder.addCallback( surfaceCallback );
Log.e( getLocalClassName(), "END: onCreate" );
}
@Override
public void onConfigurationChanged( Configuration newConfig )
{
super.onConfigurationChanged( newConfig );
}
SurfaceHolder.Callback surfaceCallback = new SurfaceHolder.Callback() {
public void surfaceCreated( SurfaceHolder holder )
{
camera = Camera.open();
imageCount = 0;
try {
camera.setPreviewDisplay( surfaceHolder );
} catch ( Throwable t )
{
Log.e( "surfaceCallback", "Exception in setPreviewDisplay()", t );
}
Log.e( getLocalClassName(), "END: surfaceCreated" );
}
public void surfaceChanged( SurfaceHolder holder, int format, int width, int height )
{
if ( camera != null )
{
camera.setPreviewCallback( new PreviewCallback() {
public void onPreviewFrame( byte[] data, Camera camera ) {
if ( camera != null )
{
Camera.Parameters parameters = camera.getParameters();
int imageFormat = parameters.getPreviewFormat();
Bitmap bitmap = null;
if ( imageFormat == ImageFormat.NV21 )
{
int w = parameters.getPreviewSize().width;
int h = parameters.getPreviewSize().height;
YuvImage yuvImage = new YuvImage( data, imageFormat, w, h, null );
Rect rect = new Rect( 0, 0, w, h );
ByteArrayOutputStream outputStream = new ByteArrayOutputStream();
yuvImage.compressToJpeg( rect, 100, outputStream );
bitmap = BitmapFactory.decodeByteArray( outputStream.toByteArray(), 0, outputStream.size() );
}
else if ( imageFormat == ImageFormat.JPEG || imageFormat == ImageFormat.RGB_565 )
{
bitmap = BitmapFactory.decodeByteArray( data, 0, data.length );
}
if ( bitmap != null )
{
FileOutputStream fileStream;
try {
String filePath = "/sdcard/image" + (imageCount++) + ".jpg";
File imageFile = new File( filePath );
fileStream = new FileOutputStream( imageFile );
bitmap.compress(Bitmap.CompressFormat.JPEG, 80, fileStream);
fileStream.flush();
fileStream.close();
} catch (FileNotFoundException e) {
Log.e( getLocalClassName(), e.toString() );
} catch (IOException e) {
Log.e( getLocalClassName(), e.toString() );
}
bitmap.recycle();
bitmap = null;
}
Log.e( getLocalClassName(), "onPreviewFrame" );
}
else
{
Log.e( getLocalClassName(), "Camera is null" );
}
}
});
Camera.Parameters parameters = camera.getParameters();
if ( parameters != null )
{
parameters.setPreviewFrameRate( 1 );
parameters.setPreviewSize( width, height );
camera.setParameters( parameters );
camera.startPreview();
}
}
}
public void surfaceDestroyed(SurfaceHolder holder) {
if ( camera != null )
{
camera.stopPreview();
camera.release();
camera = null;
}
Log.e( getLocalClassName(), "END: surfaceDestroyed" );
}
};
}