bradlarson / gpuimage2
GPUImage 2 is a BSD-licensed Swift framework for GPU-accelerated video and image processing.
License: BSD 3-Clause "New" or "Revised" License
I'm running Xcode 7.3.1. I've triple-checked that I've followed the setup instructions.
For any filter I run, via any method (filterWithOperation, filterWithPipeline, or using PictureOutput()), the output image is always just a red fill.
Maybe I'm doing something wrong, but maybe others are running into this too?
On iPhone 4 and earlier, the RenderView doesn't seem to match the proper screen size the way GPUImage 1.x did.
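Not a confirmed fix, but the view's fill mode is worth ruling out; a minimal sketch, assuming RenderView exposes a fillMode property as its 1.x counterpart did:

// Ask the view to letterbox instead of stretching; renderView is assumed
// to be the RenderView attached to the camera pipeline.
renderView.fillMode = .PreserveAspectRatio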
Hi,
I found that the simpleVideoRecorder example is not working.
When I press the record button, the screen flashes once and then nothing happens: no recording, the button label doesn't change to "Stop", and no video appears in the photo library. My device is an iPad mini 2.
Here is the screencast: https://streamable.com/96v2
After some logging I found that when the record button is pressed, both the if part and the else part of the capture function are called. Refer to the code below:
@IBAction func capture(sender: AnyObject) {
    print("CLICKED") // Called
    if (!isRecording) {
        do {
            self.isRecording = true
            let documentsDir = try NSFileManager.defaultManager().URLForDirectory(.DocumentDirectory, inDomain:.UserDomainMask, appropriateForURL:nil, create:true)
            let fileURL = NSURL(string:"test.mp4", relativeToURL:documentsDir)!
            do {
                try NSFileManager.defaultManager().removeItemAtURL(fileURL)
            } catch {
            }
            movieOutput = try MovieOutput(URL:fileURL, size:Size(width:480, height:640), liveVideo:true)
            camera.audioEncodingTarget = movieOutput
            filter --> movieOutput!
            movieOutput!.startRecording()
            (sender as! UIButton).titleLabel?.text = "Stop"
            print("RECORDING!") // Called
        } catch {
            print("ERROR!!!!!") // Not Called
            fatalError("Couldn't initialize movie, error: \(error)")
        }
    } else {
        movieOutput?.finishRecording {
            self.isRecording = false
            dispatch_async(dispatch_get_main_queue()) {
                (sender as! UIButton).titleLabel?.text = "Record"
                print("STOPPPPPPP!!!") // Called, why?
            }
            print("STOPPED!!!") // Called, why?
            self.camera.audioEncodingTarget = nil
            self.movieOutput = nil
        }
    }
}
Has anyone had the same problem? Any idea how to fix it?
Stan
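One hedged guess, not verified against this project: if the button's Touch Up Inside event is wired to capture(_:) twice in the storyboard, the method fires twice per tap, which would make both branches run back to back. A quick runtime check using UIKit's actionsForTarget(_:forControlEvent:), where recordButton is an assumed outlet name:

// Hypothetical debugging snippet: list every action registered on the
// button for Touch Up Inside. Seeing "capture:" twice confirms double wiring.
let actions = recordButton.actionsForTarget(self, forControlEvent: .TouchUpInside)
print("Touch Up Inside actions: \(actions ?? [])")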
OutputImage is always created as nil, and its value is set in a callback. The method returns a force-unwrapped optional that will therefore always be nil, and it crashes.
My hack to fix it is to always return self, but I am not familiar enough with the library to know whether that is a correct approach.
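A hedged sketch of a safer shape than force-unwrapping, assuming the value really is delivered on an asynchronous callback: block on a semaphore until the callback fires. The synchronousFilteredImage wrapper name and the one-second timeout are illustrative, not library API:

// Sketch: wait for the asynchronous imageAvailableCallback before returning,
// instead of reading a force-unwrapped property that hasn't been set yet.
func synchronousFilteredImage(input: PictureInput, output: PictureOutput) -> UIImage? {
    var result: UIImage?
    let semaphore = dispatch_semaphore_create(0)
    output.imageAvailableCallback = { image in
        result = image
        dispatch_semaphore_signal(semaphore)
    }
    input.processImage()
    // Give up after one second rather than deadlocking the calling thread.
    dispatch_semaphore_wait(semaphore, dispatch_time(DISPATCH_TIME_NOW, Int64(NSEC_PER_SEC)))
    return result
}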
I'm just trying to convert an existing Swift project to this new Swift version, but I cannot get the video to orient correctly in a landscape app. It looks like the PhysicalCameraLocation enum handles the orientation. If I change it to Portrait (which should be wrong), it is upside down (which, perversely, is what I want), but it would be nice to have more control.
What am I missing?
I've also had to make the AVCaptureDevice of the Camera public, as I need to change focus and exposure.
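For reference, a minimal sketch of the focus/exposure change once the capture device is reachable. This is plain AVFoundation rather than GPUImage2 API, and inputCamera is an assumed name for the exposed AVCaptureDevice:

// Standard AVFoundation device configuration; changes must be bracketed
// by lockForConfiguration()/unlockForConfiguration().
do {
    try inputCamera.lockForConfiguration()
    if inputCamera.isFocusModeSupported(.ContinuousAutoFocus) {
        inputCamera.focusMode = .ContinuousAutoFocus
    }
    if inputCamera.isExposureModeSupported(.ContinuousAutoExposure) {
        inputCamera.exposureMode = .ContinuousAutoExposure
    }
    inputCamera.unlockForConfiguration()
} catch {
    print("Couldn't configure camera: \(error)")
}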
Hello again!
I was having some trouble getting a MonochromeFilter and a PictureOutput to receive framebuffers until I made them class variables and defined them later. Not sure if this is intended or not, but it didn't seem normal to me. My code is very simple, and I was able to verify that when my consumers were not class members, their updateTargetsWithFramebuffer(_:) method was never even called.
Here's a quick sample:
class ViewController: UIViewController {
    @IBOutlet var cameraView: RenderView!

    override func viewDidLoad() {
        super.viewDidLoad()
        do {
            let camera = try Camera(sessionPreset: AVCaptureSessionPreset640x480, cameraDevice: nil, location: .BackFacing, captureAsYUV: true)
            let pictureOutput = PictureOutput()
            pictureOutput.encodedImageFormat = .PNG
            pictureOutput.imageAvailableCallback = { image in
                print("Got an image!")
            }
            let monochromeFilter = MonochromeFilter()
            // Set up pipelines
            camera --> monochromeFilter
            camera --> cameraView
            monochromeFilter --> pictureOutput
            monochromeFilter.targets.forEach({ (consumer) in
                print(consumer.0)
            })
            camera.startCapture()
        } catch {
            let errorAlertController = UIAlertController(title: NSLocalizedString("Error", comment: "Error"), message: "Couldn't initialize camera", preferredStyle: .Alert)
            errorAlertController.addAction(UIAlertAction(title: NSLocalizedString("OK", comment: "OK"), style: .Default, handler: nil))
            self.presentViewController(errorAlertController, animated: true, completion: nil)
            print("Couldn't initialize camera: \(error)")
        }
    }
}
I altered the updateTargetsWithFramebuffer(_:) method in Pipeline.swift like so, and saw that only the Camera was called, and its only target was the RenderView:
public func updateTargetsWithFramebuffer(framebuffer:Framebuffer) {
    if targets.count == 0 { // Deal with the case where no targets are attached by immediately returning framebuffer to cache
        framebuffer.lock()
        framebuffer.unlock()
    } else {
        // Lock first for each output, to guarantee proper ordering on multi-output operations
        for _ in targets {
            framebuffer.lock()
        }
    }
    for (target, index) in targets {
        print(self)
        print(target)
        target.newFramebufferAvailable(framebuffer, fromSourceIndex:index)
    }
}
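That behavior is consistent with the pipeline holding weak references to its targets: locals declared inside viewDidLoad() can be deallocated as soon as the method returns, leaving nothing to receive framebuffers. A minimal sketch of the workaround, assuming weak target references are indeed the cause:

// Keep the pipeline objects alive as properties so the (presumably weak)
// target references inside the pipeline still have something to point at.
class ViewController: UIViewController {
    @IBOutlet var cameraView: RenderView!
    var camera: Camera!
    let monochromeFilter = MonochromeFilter()
    let pictureOutput = PictureOutput()

    override func viewDidLoad() {
        super.viewDidLoad()
        // try? for brevity in this sketch; a real app should handle the failure.
        camera = try? Camera(sessionPreset: AVCaptureSessionPreset640x480)
        camera --> monochromeFilter
        camera --> cameraView
        monochromeFilter --> pictureOutput
        camera.startCapture()
    }
}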
Can you generate a version for CocoaPods?
The following Swift 3 iOS app demonstrates that after removeAllTargets is called, subsequent executions of a pipeline that includes UnsharpMask fail in various bad ways and end up doubling framebuffers coming out of the pipeline.
https://github.com/mikebikemusic/Animate
The list of problems that occur in this app:
I use the HighlightAndShadowTint filter and change the value of the shadowTintIntensity property, but nothing happens.
override func loadView() {
    super.loadView()
    picture = PictureInput(image: UIImage(named: "WID-small.jpg")!)
    filter = HighlightAndShadowTint()
    picture --> filter --> renderView
    picture.processImage()
}

@IBAction func updateValue(sender: AnyObject) {
    filter.shadowTintIntensity = shadowIntensitySlider.value
    picture.processImage()
    print(filter.shadowTintIntensity)
}
Hi Brad,
I'm trying your sample code:
let boxBlur = BoxBlur()
let contrast = ContrastAdjustment()
let myGroup = OperationGroup()

myGroup.configureGroup{input, output in
    input --> self.boxBlur --> self.contrast --> output
}
"let myGroup = OperationGroup()" is always giving me the error as "'OperationGroup' cannot be constructed because it has no accessible initializers". Not sure if I missed any thing or the class has not been completed. Thanks.
runtime error at Framebuffer.swift:241
fatal error: Double value cannot be converted to Int32 because the result would be greater than Int32.max
2016-06-29 10:52:24.200599 GPUImageTest[2010:280192] fatal error: Double value cannot be converted to Int32 because the result would be greater than Int32.max
Encountered when running on an iPhone 6 with iOS 10.
I've been working on this program for several days without issue, then this occurred once. Thought I'd report it now in case it's really that rare.
Is there a way to apply a CoreImage filter to every frame and have that output to a renderView?
Also, is there a way to get a CMSampleBuffer back from the new filtered frame to pass into my custom encoder?
Thanks again for a great library.
This is so useful for me.
Here is my question: I want to create a grayscale filter, but I don't know how to create this operation.
Could you please make it?
Thanks a lot!
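For what it's worth, grayscale may already be covered by the bundled Luminance operation (or by SaturationAdjustment with saturation set to 0). A minimal sketch, assuming Luminance exists in your checkout, with picture and renderView standing in for an existing PictureInput and RenderView:

// Grayscale conversion via the luminance reduction operation.
let grayscale = Luminance()
picture --> grayscale --> renderView
picture.processImage()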
Hi,
I've recently been playing around and getting to know everything, and I've run into an issue where trying to process an image with this filter combo produces a black image.
I was hoping you might be able to shed some light on this:
let saturation = SaturationAdjustment()
saturation.saturation = self.saturation
let pictureInput = PictureInput(image: self.image!)
let pictureOutput = PictureOutput()
pictureOutput.imageAvailableCallback = { [weak self] image in
    if let me = self {
        for view in me.backImageViews {
            view.image = image
        }
    }
}
pictureInput --> saturation --> blurFilter --> pictureOutput
pictureInput.processImage()
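One hedged possibility, echoing the framebuffer issue earlier in this thread: saturation, pictureInput, and pictureOutput are all locals, so they may be deallocated before the asynchronous processing completes, which could produce the black result. Two sketched workarounds, assuming that is the cause (the synchronously: parameter is assumed to exist in your version of PictureInput):

// Option 1: retain the pipeline objects as instance properties instead of locals.
self.pictureInput = PictureInput(image: self.image!)
self.saturationFilter = SaturationAdjustment()
self.pictureOutput = PictureOutput()
// ... wire the pipeline exactly as before ...

// Option 2: block until processing completes, so the locals outlive the work.
pictureInput.processImage(synchronously: true)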
GPUImage is not compatible with Xcode 8 using Swift 3.0.
Critical incompatibilities are still raised even after setting the new Swift legacy-mode parameter in Xcode; if you try to apply the automatic fixes, the project will run but crash on start.
There is another project: https://github.com/wangjwchn/MetalAcc
It claims to be based on GPUImage 1 but "using Metal and written in Swift".
Would there be advantages to using Metal?
I am trying to blend multiple layers in order to reduce noise. I see that I can blend two images, but is it possible to blend more than that (say 5 images) using GPUImage?
Thanks
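A hedged sketch of one approach: blend operations take two inputs, so five images can be folded pairwise through a chain of blends. AddBlend is used as a stand-in here; whichever two-input blend matches the noise-averaging math would slot in the same way:

// Fold five PictureInputs through a chain of two-input blends.
// `inputs` is assumed to be an array of five PictureInput instances.
let blends = (0..<4).map { _ in AddBlend() }
inputs[0] --> blends[0]
inputs[1] --> blends[0]
for i in 1..<4 {
    blends[i - 1] --> blends[i]
    inputs[i + 1] --> blends[i]
}
blends[3] --> pictureOutput
for input in inputs { input.processImage() }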
I'm trying to set these parameters to floats, and here's what I get.
HistogramDisplay behaviour changed from 1.x: there is now a bug that forces it to render in portrait-mode proportions.
Is there a way to use custom Objective-C GPUImage filters in GPUImage2?
Hello! Been loving using this library; it makes everything much easier.
One quick thing I noticed, though: I'm using a PictureOutput to write individual frames to disk, but my memory usage never decreases after writing...
Thanks for your help!
Some code:
class ImageSaver: NSOperation {
    let imageData: NSData
    let url: NSURL

    init(imageData: NSData, url: NSURL) {
        self.imageData = imageData
        self.url = url
    }

    override func main() {
        if self.cancelled { return }
        if imageData.length > 0 {
            #if DEBUG
                print("Writing image to \(url)")
            #endif
            do {
                try imageData.writeToURL(url, options: .DataWritingAtomic)
            } catch {
                print("Error writing image: \(error)")
            }
        }
    }
}
// In my viewDidLoad() function
do {
    renderView.orientation = UIApplication.sharedApplication().statusBarOrientation.toImageOrientation()
    camera = try Camera(sessionPreset: AVCaptureSessionPreset640x480, cameraDevice: nil, location: .BackFacing, captureAsYUV: false)
    filter = MonochromeFilter()
    // Setup callback for picture data
    pictureOutput = PictureOutput()
    pictureOutput.encodedImageFormat = .JPEG
    pictureOutput.onlyCaptureNextFrame = false
    pictureOutput.encodedImageAvailableCallback = { imageData in
        let imageURL = self.folderURL.URLByAppendingPathComponent(String(format:"%19.0f.jpg", CFAbsoluteTimeGetCurrent() * 1e9))
        let imageSaver = ImageSaver(imageData: imageData, url: imageURL)
        self.imageQueue.addOperation(imageSaver)
    }
    camera --> filter --> renderView
    filter.addTarget(pictureOutput)
    camera.startCapture()
} catch {
    // Closing catch restored; the original snippet was truncated here.
    print("Couldn't initialize camera: \(error)")
}
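One hedged suggestion for the growth: wrap the per-frame work in autoreleasepool so temporaries created for every frame are drained immediately instead of accumulating on a long-lived capture thread. A sketch, assuming autorelease buildup is the culprit:

pictureOutput.encodedImageAvailableCallback = { imageData in
    // Drain per-frame temporaries as soon as each frame has been handed off.
    autoreleasepool {
        let imageURL = self.folderURL.URLByAppendingPathComponent(String(format:"%19.0f.jpg", CFAbsoluteTimeGetCurrent() * 1e9))
        self.imageQueue.addOperation(ImageSaver(imageData: imageData, url: imageURL))
    }
}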
You can check the directory examples/ios/simplevideofilter. Config: build the GPUImage project, release version.
I don't see any support for ACV curve files. GPUImage 1.x had:
- (id)initWithACV:(NSString*)curveFilename;
I have been trying to swap to the front-facing camera for a while now.
Can anyone help me with this, and also with zoom and flash?
It would be much appreciated.
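A hedged sketch for the camera swap, assuming the PhysicalCameraLocation enum's FrontFacing case and the removeAllTargets() helper mentioned elsewhere in this thread: build a new Camera and rewire the pipeline. Zoom and flash would go through the underlying AVCaptureDevice, which another issue here notes isn't public yet:

// Tear down the old camera and rebuild the same pipeline with the front camera.
// try! for brevity in this sketch; real code should handle the failure.
camera.stopCapture()
camera.removeAllTargets()
camera = try! Camera(sessionPreset: AVCaptureSessionPreset640x480, location: .FrontFacing)
camera --> filter --> renderView
camera.startCapture()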
My bad, I was following a tutorial for GPUImage (the framework written in Objective-C).
Hi,
I just wanted to confirm the proper steps for changing filters in real time; with the first GPUImage, one had to remove targets and then add the targets of the new filter they wanted to display.
Thanks
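In case it helps while waiting for confirmation, a hedged sketch of the same pattern here, using removeAllTargets() to detach the old operation before wiring the new one:

// Swap oldFilter for newFilter in a camera --> filter --> renderView chain.
camera.removeAllTargets()
oldFilter.removeAllTargets()
camera --> newFilter --> renderView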
A crash occurs because of an Int overflow on armv7 devices, especially the iPod touch 5. It can be fixed by changing Int to Int64.
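For context, a minimal illustration of why the width matters, assuming nanosecond-scale arithmetic like the timestamp math elsewhere in this thread:

let seconds = 4.2
// On armv7, Int is 32 bits, so this traps once the result exceeds Int32.max:
// let nanos = Int(seconds * 1_000_000_000)
// Int64 keeps the same math safe on 32-bit and 64-bit devices alike:
let nanos = Int64(seconds * 1_000_000_000)
print(nanos) // 4200000000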
How do I work with the camera in real time? The task is to pass the image from the GPUImage Camera to my own function for further processing, for example with TesseractOCR.
Hi Brad, thanks for your Objective-C version; I worked with it a lot. Please take care of Swift 3 and iOS 10 in this version. Thanks for your great help.
Hi Larson, I found this code at line 131 of Camera.swift: videoOutput.setSampleBufferDelegate(self, queue:cameraProcessingQueue), where cameraProcessingQueue is a global concurrent queue. However, the documentation for that function in the AVFoundation SDK says, "A serial dispatch queue must be used to guarantee that video frames will be delivered in order."
This also happens in the GPUImageVideoCamera initWithSessionPreset:cameraPosition: implementation in the GPUImage framework.
It confuses me, and I want to know why.
Thanks.
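For reference, a sketch of what the documentation-conforming version would look like, using a dedicated serial queue. Whether GPUImage compensates for the concurrent queue in some other way (a semaphore, for instance) is a separate question this sketch doesn't answer; the queue label is a placeholder:

// A dedicated serial queue guarantees in-order frame delivery, per the
// AVCaptureVideoDataOutput documentation.
let serialCameraQueue = dispatch_queue_create("com.example.cameraFrameProcessing", DISPATCH_QUEUE_SERIAL)
videoOutput.setSampleBufferDelegate(self, queue: serialCameraQueue)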
Pipeline.swift could not compile. I added GPUImage-iOS.xcodeproj as a framework in Target Dependencies and also in Link Binary With Libraries. Was there something else to be done besides that?
Hi,
For iOS and macOS projects it would be great to make your component Carthage compatible.
Take a look at these articles:
On your README.md, add it at the top and add a section for installation with Carthage.
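For illustration, installation would then be a one-line Cartfile entry plus a build; the entry below is a sketch of what Carthage support would look like, not something that works today:

# Cartfile
github "BradLarson/GPUImage2"

followed by running carthage update --platform iOS and embedding the built framework.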
If I pause the session using stopCapture() to change a filter and then try to start it again, the RenderView continues to show the still frame from when capture stopped and never gets going again.
If I pipe the camera straight into the view, it stops and starts again with no problem.
I'm using a simple crop and transform filter. I call removeAllTargets on the camera and create the same pipeline again, but it never kicks off. Do we have to remove the camera and start completely from scratch?
Pipeline.swift:97:29: Argument passed to call that takes no arguments
public func generate() -> AnyGenerator<(ImageConsumer, UInt)> {
    var index = 0
    return AnyGenerator { () -> (ImageConsumer, UInt)? in
        if (index >= self.targets.count) {
            return nil
        }
        while (self.targets[index].value == nil) {
            self.targets.removeAtIndex(index)
            if (index >= self.targets.count) {
                return nil
            }
        }
        index += 1
        return (self.targets[index - 1].value!, self.targets[index - 1].indexAtTarget)
    }
}
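A hedged note on that error: the AnyGenerator closure initializer only exists from Swift 2.2 onward; Swift 2.1 and earlier used the anyGenerator free function instead, which matches a "takes no arguments" diagnostic on an older toolchain. A sketch of the pre-2.2 spelling, assuming an older Xcode is in play:

// Swift 2.1 and earlier: the anyGenerator free function replaces the
// AnyGenerator(body:) initializer that Swift 2.2 introduced.
return anyGenerator { () -> (ImageConsumer, UInt)? in
    // ... same body as above ...
    return nil
}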
Raising this issue as much so that I can follow progress as anything else. I'm sure it's in the queue.
https://github.com/BradLarson/GPUImage2#filtering-and-re-encoding-a-movie
If you have benchmarking turned on, you can see in the logs that frames are still coming in after calling stopCapture. Even without benchmarking, targets are still being given frames to process. While in this state, it is unsafe to nil out the camera instance, as deinit will run without captureOutput having been stopped.
Hi,
I got your SimpleVideoRecorder example working (checkout date 2016-07-05). Two things:
Any ideas?
Thanks for your extremely good work
Chris
I have a web view in a view controller. I want to record only the web view, not the full screen.
How can I do that with Swift?
Please help.
I'm sure you have a ton on your plate with this rewrite, but I thought I'd suggest a feature I'd love to see: one-to-many or many-to-one piping. That is, starting with a single video file, output several video files; or, starting with several video files, composite them into one video, similar to AVComposition.
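For the one-to-many half, a hedged sketch: sources can already feed multiple targets, so a single MovieInput could fan out to several differently filtered MovieOutputs (the URLs and sizes below are placeholders). Many-to-one compositing would presumably need blend operations or AVFoundation-level composition on top of this:

// Fan one movie out into two differently processed recordings.
do {
    let movie = try MovieInput(url: sourceURL) // sourceURL is a placeholder
    let mono = MonochromeFilter()
    let desaturate = SaturationAdjustment()
    let outputA = try MovieOutput(URL: urlA, size: Size(width: 1280, height: 720))
    let outputB = try MovieOutput(URL: urlB, size: Size(width: 1280, height: 720))

    movie --> mono --> outputA
    movie --> desaturate --> outputB
    outputA.startRecording()
    outputB.startRecording()
    movie.start()
} catch {
    print("Couldn't set up movie pipeline: \(error)")
}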
High-definition video shot on an iPhone in portrait orientation is screwed up because movie frames are not being scaled down properly. Put a breakpoint in MovieInput's processMovieFrame and inspect bufferHeight and bufferWidth.
My GitHub setup decided to only commit the Derived Data folder, so I'm linking to a zipped project that does this:
import UIKit
import GPUImage

class ViewController: UIViewController {
    private var movieFile: MovieInput?
    @IBOutlet weak var renderView: RenderView!

    override func viewDidLoad() {
        super.viewDidLoad()
        do {
            movieFile = try MovieInput(url: NSBundle.mainBundle().URLForResource("IMG_2809", withExtension: "mov")!)
            movieFile! --> renderView
            movieFile?.start()
        } catch {
            print("Unable to play")
        }
    }
}
https://drive.google.com/open?id=0Bz6FPHkUDa41LWRuMzZGZ0VCV00
Xcode 8.0 beta 5 (8S193k)
GPUImage2/framework/Source/OperationGroup.swift:16:78: Expected type
GPUImage2/framework/Source/OperationGroup.swift:17:32: Argument passed to call that takes no arguments
GPUImage2/framework/Source/OperationGroup.swift:16:78: Expected ',' separator
Is there any option to resize the canvas when using the Transform operation to resize the image with Matrix4x4(CGAffineTransformMakeScale(..., ...))? The image gets resized, but the surrounding canvas stays the same.
Hi,
I am playing around with the chroma key filters. I have an image with a green-screen background, and I am trying to isolate the minion. This works well when displayed in a render view, but doesn't work at all when I display the filtered image in a UIImageView or when I write the PNG to disk. Am I using it right?
let output = PictureOutput() // instance variable

private func process() {
    output.imageAvailableCallback = { image in
        NSOperationQueue.mainQueue().addOperationWithBlock {
            self.imageView.image = image
        }
    }
    picture --> filter
    filter --> renderView
    filter --> output
    picture.processImage()
}
The top view is the RenderView; the bottom one is the UIImageView, which shows the original image even though it should show the processed output.
Hi Brad,
Thanks for making a Swift port; I'm learning so much from studying it. Any plans to do a UI element demo similar to the Obj-C version?
It was in the first GPUImage but hasn't been ported over yet. This would also require the Parallel Coordinates transform to be ported over.
Hey Brad, super excited about this new release. Great work... You inspire.
I was wondering if you had done any metrics on the new version versus the old? Are there significant performance increases?
I'm not sure how feasible this is, but for one of my use cases (where there is a lot of sharing between iOS and OSX), it would be nice to have a single universal framework along the lines of this: http://colemancda.github.io/programming/2015/02/11/universal-ios-osx-framework/