SDImageCoderHelper

@interface SDImageCoderHelper : NSObject

Provides common helper methods for building image decoders and encoders.

  • Return an animated image from a frames array. For UIKit, this applies a patch and then creates an animated UIImage. The patch is needed because +[UIImage animatedImageWithImages:duration:] simply averages the duration across all images, so it does not work when different frames have different durations; therefore each frame is repeated the necessary number of times to make the timing come out right. For AppKit, NSImage does not support animation other than GIF, so this tries to encode the frames to GIF format and then creates an animated NSImage for rendering. Note that the animated image may lose some detail if the input frames contain a full alpha channel, because GIF only supports a 1-bit alpha channel (each pixel is either fully transparent or fully opaque).

    Declaration

    Objective-C

    + (UIImage *_Nullable)animatedImageWithFrames:
        (NSArray<SDImageFrame *> *_Nullable)frames;

    Swift

    class func animatedImage(with frames: [SDImageFrame]?) -> UIImage?

    Parameters

    frames

    The frames array. If frames is nil or empty, returns nil

    Return Value

    An animated image for rendering in UIImageView (UIKit) or NSImageView (AppKit)
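
A usage sketch (the solid-color frames are placeholders, and SDImageFrame is assumed to expose its frameWithImage:duration: factory as the Swift initializer shown):

```swift
import SDWebImage
import UIKit

// Build a placeholder frame with a given color and duration.
func solidFrame(_ color: UIColor, duration: TimeInterval) -> SDImageFrame {
    let renderer = UIGraphicsImageRenderer(size: CGSize(width: 8, height: 8))
    let image = renderer.image { ctx in
        color.setFill()
        ctx.fill(CGRect(x: 0, y: 0, width: 8, height: 8))
    }
    return SDImageFrame(image: image, duration: duration)
}

let frames = [
    solidFrame(.red, duration: 0.1),
    solidFrame(.blue, duration: 0.3), // three times the first frame's duration
]

// The helper repeats frames internally so the unequal durations survive the
// averaging done by +[UIImage animatedImageWithImages:duration:].
let animated = SDImageCoderHelper.animatedImage(with: frames)
```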

  • Return a frames array from an animated image. For UIKit, this reverses the patch described above and then creates the frames array; it also works for a normal animated UIImage. For AppKit, NSImage does not support animation other than GIF, so this tries to decode the GIF imageRep and then creates the frames array.

    Declaration

    Objective-C

    + (NSArray<SDImageFrame *> *_Nullable)framesFromAnimatedImage:
        (UIImage *_Nullable)animatedImage;

    Swift

    class func frames(from animatedImage: UIImage?) -> [SDImageFrame]?

    Parameters

    animatedImage

    An animated image. If it is not animated, returns nil

    Return Value

    The frames array
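
A sketch of the inverse direction, assuming animatedImage is an already-decoded animated UIImage (for example, from GIF data):

```swift
import SDWebImage
import UIKit

func logFrames(of animatedImage: UIImage) {
    // Returns nil for a non-animated image.
    guard let frames = SDImageCoderHelper.frames(from: animatedImage) else { return }
    for (index, frame) in frames.enumerated() {
        print("frame \(index): \(frame.image.size), \(frame.duration)s")
    }
}
```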

  • Return the shared device-dependent RGB color space. This follows The Get Rule. On iOS, it is created with deviceRGB (sRGB is used if available). On macOS, it comes from the screen color space (deviceRGB is used if that fails). Because it is shared, you should not retain or release this object.

    Declaration

    Objective-C

    + (CGColorSpaceRef _Nonnull)colorSpaceGetDeviceRGB;

    Swift

    class func colorSpaceGetDeviceRGB() -> CGColorSpace

    Return Value

    The device-dependent RGB color space
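
A typical use is supplying the shared color space to a bitmap context; because of The Get Rule the value is borrowed, so there is no matching release:

```swift
import SDWebImage
import CoreGraphics

let colorSpace = SDImageCoderHelper.colorSpaceGetDeviceRGB() // shared, do not release

// Create a small bitmap context for manual drawing.
let context = CGContext(data: nil,
                        width: 64,
                        height: 64,
                        bitsPerComponent: 8,
                        bytesPerRow: 0,
                        space: colorSpace,
                        bitmapInfo: CGImageAlphaInfo.premultipliedFirst.rawValue)
```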

  • Check whether a CGImage contains an alpha channel.

    Declaration

    Objective-C

    + (BOOL)CGImageContainsAlpha:(CGImageRef _Nonnull)cgImage;

    Swift

    class func cgImageContainsAlpha(_ cgImage: CGImage) -> Bool

    Parameters

    cgImage

    The CGImage

    Return Value

    Returns YES if the CGImage contains an alpha channel, otherwise NO
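
One way this check is commonly used is to pick an encoding format (a sketch; the SDImageFormat constants are part of SDWebImage):

```swift
import SDWebImage
import UIKit

func preferredFormat(for image: UIImage) -> SDImageFormat {
    guard let cgImage = image.cgImage else { return .JPEG }
    // PNG preserves transparency; JPEG is usually smaller for opaque images.
    return SDImageCoderHelper.cgImageContainsAlpha(cgImage) ? .PNG : .JPEG
}
```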

  • Create a decoded CGImage from the provided CGImage. This follows The Create Rule, so you are responsible for calling release after use. It detects whether the image contains an alpha channel, then creates a new bitmap context with the same size as the image and draws the image into it. This ensures the image needs no extra decoding after being set on the image view.

    Note

    This actually calls CGImageCreateDecoded:orientation: with the Up orientation.

    Declaration

    Objective-C

    + (CGImageRef _Nullable)CGImageCreateDecoded:(CGImageRef _Nonnull)cgImage;

    Swift

    class func cgImageCreateDecoded(_ cgImage: CGImage) -> CGImage?

    Parameters

    cgImage

    The CGImage

    Return Value

    A newly created decoded image
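
A sketch of force-decoding ahead of display; note that in Swift, ARC manages the returned CGImage, whereas in Objective-C you must call CGImageRelease() yourself per The Create Rule:

```swift
import SDWebImage
import UIKit

func predecoded(_ image: UIImage) -> UIImage? {
    guard let cgImage = image.cgImage,
          let decoded = SDImageCoderHelper.cgImageCreateDecoded(cgImage) else {
        return nil
    }
    // Preserve the original scale and orientation on the wrapper UIImage.
    return UIImage(cgImage: decoded, scale: image.scale,
                   orientation: image.imageOrientation)
}
```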

  • Create a decoded CGImage from the provided CGImage and orientation. This follows The Create Rule, so you are responsible for calling release after use. It detects whether the image contains an alpha channel, then creates a new bitmap context with the same size as the image and draws the image into it. This ensures the image needs no extra decoding after being set on the image view.

    Declaration

    Objective-C

    + (CGImageRef _Nullable)CGImageCreateDecoded:(CGImageRef _Nonnull)cgImage
                                     orientation:
                                         (CGImagePropertyOrientation)orientation;

    Swift

    class func cgImageCreateDecoded(_ cgImage: CGImage, orientation: CGImagePropertyOrientation) -> CGImage?

    Parameters

    cgImage

    The CGImage

    orientation

    The EXIF image orientation.

    Return Value

    A newly created decoded image
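
A sketch that reads the EXIF orientation from a CGImageSource (assumed to be created elsewhere from encoded data) and bakes it into the decoded bitmap:

```swift
import SDWebImage
import ImageIO

func decodedUprightImage(from source: CGImageSource) -> CGImage? {
    guard let cgImage = CGImageSourceCreateImageAtIndex(source, 0, nil) else {
        return nil
    }
    // Default to Up when the metadata carries no orientation tag.
    var orientation = CGImagePropertyOrientation.up
    if let props = CGImageSourceCopyPropertiesAtIndex(source, 0, nil) as? [CFString: Any],
       let raw = props[kCGImagePropertyOrientation] as? UInt32,
       let exif = CGImagePropertyOrientation(rawValue: raw) {
        orientation = exif
    }
    return SDImageCoderHelper.cgImageCreateDecoded(cgImage, orientation: orientation)
}
```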

  • Create a scaled CGImage from the provided CGImage and size. This follows The Create Rule, so you are responsible for calling release after use. It detects whether the image size matches the target size; if not, the image is stretched to the target size.

    Declaration

    Objective-C

    + (CGImageRef _Nullable)CGImageCreateScaled:(CGImageRef _Nonnull)cgImage
                                           size:(CGSize)size;

    Swift

    class func cgImageCreateScaled(_ cgImage: CGImage, size: CGSize) -> CGImage?

    Parameters

    cgImage

    The CGImage

    size

    The scale size in pixel.

    Return Value

    A newly created scaled image
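
A thumbnail sketch; note the target size is in pixels, and the helper stretches rather than aspect-fits:

```swift
import SDWebImage
import UIKit

func thumbnailCGImage(of image: UIImage, pixelSize: CGSize) -> CGImage? {
    guard let cgImage = image.cgImage else { return nil }
    // Stretches to exactly pixelSize if the sizes differ.
    return SDImageCoderHelper.cgImageCreateScaled(cgImage, size: pixelSize)
}
```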

  • Return the decoded image for the provided image. Unlike CGImageCreateDecoded:, this will not decode an image that contains an alpha channel, nor an animated image.

    Declaration

    Objective-C

    + (UIImage *_Nullable)decodedImageWithImage:(UIImage *_Nullable)image;

    Swift

    class func decodedImage(with image: UIImage?) -> UIImage?

    Parameters

    image

    The image to be decoded

    Return Value

    The decoded image
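
A common pattern is to pre-decode off the main thread before handing the image to the UI (a sketch):

```swift
import SDWebImage
import UIKit

func deliver(_ image: UIImage, to imageView: UIImageView) {
    DispatchQueue.global(qos: .userInitiated).async {
        // Fall back to the original when decoding is skipped or fails.
        let decoded = SDImageCoderHelper.decodedImage(with: image) ?? image
        DispatchQueue.main.async {
            imageView.image = decoded
        }
    }
}
```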

  • Return the decoded and possibly scaled-down image for the provided image. If the image is larger than the limit size, it will be scaled down; otherwise this works just like decodedImageWithImage:.

    Declaration

    Objective-C

    + (UIImage *_Nullable)decodedAndScaledDownImageWithImage:
                              (UIImage *_Nullable)image
                                                  limitBytes:(NSUInteger)bytes;

    Swift

    class func decodedAndScaledDownImage(with image: UIImage?, limitBytes bytes: UInt) -> UIImage?

    Parameters

    image

    The image to be decoded and scaled down

    bytes

    The limit in bytes. Pass 0 to use the built-in limit.

    Return Value

    The decoded and probably scaled down image
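
A sketch (largeImage stands in for an already-loaded UIImage):

```swift
import SDWebImage
import UIKit

func displayable(_ largeImage: UIImage) -> UIImage? {
    // Pass 0 to use the built-in platform limit, or set an explicit budget,
    // here 8 MB of decoded bitmap data.
    return SDImageCoderHelper.decodedAndScaledDownImage(with: largeImage,
                                                        limitBytes: 8 * 1024 * 1024)
}
```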

  • Control the default limit in bytes above which large images are scaled down. This value must be larger than or equal to 1MB. Defaults to 60MB on iOS/tvOS, 90MB on macOS, 30MB on watchOS.

    Declaration

    Objective-C

    @property (class) NSUInteger defaultScaleDownLimitBytes;

    Swift

    class var defaultScaleDownLimitBytes: UInt { get set }
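
For example, to tighten the global threshold (values below 1MB are rejected per the rule above):

```swift
import SDWebImage

// Scale down any image whose decoded bitmap would exceed 30 MB.
SDImageCoderHelper.defaultScaleDownLimitBytes = 30 * 1024 * 1024
```
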

  • Convert an EXIF image orientation to an iOS one.

    Declaration

    Objective-C

    + (UIImageOrientation)imageOrientationFromEXIFOrientation:
        (CGImagePropertyOrientation)exifOrientation;

    Swift

    class func imageOrientation(from exifOrientation: CGImagePropertyOrientation) -> UIImage.Orientation

    Parameters

    exifOrientation

    EXIF orientation

    Return Value

    iOS orientation
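
A sketch; the case names match one-to-one between the two types, but their raw values differ, which is why an explicit conversion is needed:

```swift
import SDWebImage
import UIKit

let exif = CGImagePropertyOrientation.leftMirrored  // EXIF value 5
let ui = SDImageCoderHelper.imageOrientation(from: exif)
// ui is UIImage.Orientation.leftMirrored, whose rawValue is 6.
```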

  • Convert an iOS orientation to an EXIF image orientation.

    Declaration

    Objective-C

    + (CGImagePropertyOrientation)exifOrientationFromImageOrientation:
        (UIImageOrientation)imageOrientation;

    Swift

    class func exifOrientation(from imageOrientation: UIImage.Orientation) -> CGImagePropertyOrientation

    Parameters

    imageOrientation

    iOS orientation

    Return Value

    EXIF orientation
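
The reverse conversion is useful when writing orientation into image metadata by hand (a sketch):

```swift
import SDWebImage
import ImageIO
import UIKit

func exifOrientationValue(for image: UIImage) -> UInt32 {
    let exif = SDImageCoderHelper.exifOrientation(from: image.imageOrientation)
    return exif.rawValue // the value stored under kCGImagePropertyOrientation
}
```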