- Edited
Depth of Field post processing effect - is this possible?
Hi Harald and everyone :grinteeth:
This question is a doozy - maybe it's easier than it seems, or maybe it's not possible at all. I'm trying to achieve the Depth of Field post-processing effect used in Hollow Knight (where the foreground and background get blurred). I would like my artist to be able to move sprites/spine animations on the Z-axis for this effect when setting up scenes.
Quick note on Hollow Knight - I've been doing as much googling/research as I can to figure out how they did their Depth of Field effect, and some people were saying that they may have just baked the blur into their graphics. But this doesn't seem to be the case, because there is a hack/mod for Hollow Knight that disables the DoF effect, which essentially renders the background as crisp and not blurred.
I'm testing things out with URP (normal URP pipeline, not the 2D Renderer), using the Spine/Sprite/Unlit shader with a perspective camera. I've looked at these two posts and believe I understand most of everything that is being discussed:
Sprite shader depth write
Post Processing with URP not working with Spine Assets
The first thing I'd like to figure out is whether it's even theoretically possible for this to work in the configuration I need. My game is overhead view (https://perennialordergame.com/), and I haven't seen any other "2D" overhead view games with this sort of effect. Would I still be able to keep (or replace with an equivalent/custom setup) my sprite/spine rendering order? I use the Transparency Sort Mode along the Y axis. One thing I read in a couple of places is that the Transparent render queue "doesn't work" with DoF. Does that mean it's not possible at all, or that you'd need a special configuration to get it to work? I noticed that when I enable "Write to Depth" on the material using Spine/Sprite/Unlit, it switches from the Transparent render queue to AlphaTest. I looked up AlphaTest and it appears to be "geometry" based (i.e., can't customize its rendering order?).
Side note - for what it's worth, having this DoF feature would be somewhat of a game changer for our design process. Having the ability to have animated spine graphics blurred in the foreground/background would be amazing, so if getting this to work means using a specific configuration of render pipeline and any other custom stuff, I would be all for giving it a try, even if it means a lot of extra work. Thanks anyone for any help!!
Doing some more googling and found this - may or may not be related. I'm not quite at the 'testing' stage yet - still gathering info and trying to make sure I understand what is going on. I've worked solely in 2D, so this is sort of a combination of stuff that usually happens in 3D (writing to depth) with a 2D workflow. I feel like I understand the theory, but not the implementation yet, if that makes sense :lol:
In the new 2D render pipeline use camera stacking with Gaussian Blur for your front and back camera.
If you don't need to change the DoF, then I would recommend using the post-processing effect only for the character(s) "layer", and baking DoF into the front and back layers in Photoshop.
foriero wrote: In the new 2D render pipeline use camera stacking with Gaussian Blur for your front and back camera.
If you don't need to change the DoF, then I would recommend using the post-processing effect only for the character(s) "layer", and baking DoF into the front and back layers in Photoshop.
I'd like to be able to have varying amounts of blur, and not a predefined amount - so I'd really like to try to get DoF working. The game is overhead view, so for example if you were in a forest, I'd like to be able to overlap multiple "leafy branch" sprites in the foreground, each at a different "level" (so the ones that are higher up are blurrier), as though you are peering down into the forest from above.
Alright, so I don't want to quite give up yet, but the more I read about this the less practical it seems to be for me to implement in my project specifically.
1) I couldn't find any examples of rendering sprites in the Transparent queue (such that I could maintain my y-axis sorting).
2) I'm not sure about this since I haven't been able to set it up to test, but if I was able to get the two-render-pass setup that Harald mentioned here Sprite shader depth write (the reply with screenshots of the Hero character) - what would happen if I rendered a particle effect "on top of" an area of the screen that is in the background (shifted backwards on Z so that it is blurry)? If I understand correctly, the depth (and therefore the blur) would either count as the background and be blurry (and so pixels of the particle would look blurry) or the depth would count as the particle effect, and not have any blur? AKA it wouldn't look the same as rendering a particle effect on top of a blurred background, right?
3) If the two-render-pass setup mentioned above means that I have to render all of my sprites/spine animations twice, that might be too expensive to do at all.
I'm wondering if Hollow Knight bypasses issue #2 because their art style is "solid colors with black border all around", and so there is a distinguished 'edge' to their art.
I'm going to think about it some more, but unless I'm misunderstanding things I think I may need to go a different route with the blurring. Like potentially set up the Foreground/Background cameras with blur, like @foriero mentioned :think:
Tails of Iron : https://youtu.be/0iKhPM-wrpQ
Ori Wisps SWITCH : https://youtu.be/kH6wTpIObxE
Ori Wisps NextGen : https://youtu.be/HxOUpb5UrRk
If you want to change the DoF dynamically in the game (for example, moving a character from back to front and focusing on him with DoF), then the two-camera system is all you need: Gaussian Blur for mobile, and Bokeh for PC/Consoles with more refined control.
Unity URP 2D PostProcessing : https://docs.unity3d.com/Packages/com.unity.render-pipelines.universal@10.2/manual/integration-with-post-processing.html#post-proc-how-to
Unity URP 2D Camera Stacking : https://docs.unity3d.com/Packages/com.unity.render-pipelines.universal@10.2/manual/cameras-multiple.html
foriero wrote: Tails of Iron : https://youtu.be/0iKhPM-wrpQ
Ori Wisps SWITCH : https://youtu.be/kH6wTpIObxE
Ori Wisps NextGen : https://youtu.be/HxOUpb5UrRk
If you want to change the DoF dynamically in the game (for example, moving a character from back to front and focusing on him with DoF), then the two-camera system is all you need: Gaussian Blur for mobile, and Bokeh for PC/Consoles with more refined control.
Unity URP 2D PostProcessing : https://docs.unity3d.com/Packages/com.unity.render-pipelines.universal@10.2/manual/integration-with-post-processing.html#post-proc-how-to
Unity URP 2D Camera Stacking : https://docs.unity3d.com/Packages/com.unity.render-pipelines.universal@10.2/manual/cameras-multiple.html
Hey Foriero, thanks again for the additional info and examples!
My game is overhead view (not a side-scroller), so I want to make sure what you're suggesting is still valid. I could set up a separate camera for the "background" elements and apply DoF to them, and just manually sort them along the Z-axis (since most of that stuff will be stationary, and wouldn't move around). But my main camera that renders all of the characters/etc (everything that isn't in the background or foreground) would need to be rendered in the Transparent queue, and be sorted along the y-axis, like how all overhead view 2D games do it.
Is it possible to apply different post-processing effects to different cameras? AKA if I wanted to apply DoF & bloom to the background camera, but I only wanted to apply bloom to the 'main' camera? I thought you could only have a single PP active, and you just selected which cameras used it. I'm googling it and there seems to be conflicting information, or maybe it depends on which Rendering Pipeline you're using.
I'll look more into it!
That is exactly what you can do. In the 2D Renderer you don't use PP v2, since that render path has its own PP implementation. Create an empty object, add a Volume to it, set that object to a specific layer, and then specify that culling layer on your stacked camera.
Regarding the sorting in Y: go to Graphics settings, where there is a Vector 3 sorting axis. Instead of 1 in Z, write 1 in Y and you will get your sorting right. For Y sorting fights, use https://docs.unity3d.com/ScriptReference/Rendering.SortingGroup.html
BTW if your background layer won't change the DoF, then do fake DoF in Photoshop for all your background sprites. ImageMagick can also help: https://legacy.imagemagick.org/Usage/blur/
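For reference, the same Y-axis sorting change can also be made per camera in code rather than in Graphics Settings. This is just a sketch using the standard `Camera.transparencySortMode` / `transparencySortAxis` APIs; note that with the 2D Renderer the equivalent setting lives on the Renderer Data asset instead:

```csharp
// Sketch: the Graphics Settings "Vector 3 sorting" change, done per-camera in code.
// These are standard Camera APIs; with the 2D Renderer the equivalent setting
// is on the 2D Renderer Data asset instead.
using UnityEngine;

[RequireComponent(typeof(Camera))]
public class YAxisSorting : MonoBehaviour
{
    void Awake()
    {
        var cam = GetComponent<Camera>();
        cam.transparencySortMode = TransparencySortMode.CustomAxis;
        cam.transparencySortAxis = new Vector3(0f, 1f, 0f); // sort transparents along Y
    }
}
```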
foriero wrote: That is exactly what you can do. In the 2D Renderer you don't use PP v2, since that render path has its own PP implementation. Create an empty object, add a Volume to it, set that object to a specific layer, and then specify that culling layer on your stacked camera.
Regarding the sorting in Y: go to Graphics settings, where there is a Vector 3 sorting axis. Instead of 1 in Z, write 1 in Y and you will get your sorting right. For Y sorting fights, use https://docs.unity3d.com/ScriptReference/Rendering.SortingGroup.html
BTW if your background layer won't change the DoF, then do fake DoF in Photoshop for all your background sprites. ImageMagick can also help: https://legacy.imagemagick.org/Usage/blur/
Awesome, thanks - I tested out the PP Volumes using the different Layers and it seems to work. Here is the Layers/Volume Mask that Foriero is talking about: https://forum.unity.com/threads/camera-stacking-urp-how-to-apply-different-post-processing-to-the-ui.830778/#post-5492010
I've been using the y-axis sorting & Sorting Groups. For the 2D Renderer, it's located on the Renderer Data itself, not in the Graphics settings:
The issue is that those settings are for the "Transparent Queue" rendering range, which is what all Sprites/Spine are rendered in by default. The Transparent Queue doesn't write to depth, so it doesn't work with DoF. To get a sprite/spine to work with DoF, you check the box that says "write to depth" in the shader, and when you select that box it switches the render queue to AlphaTest instead of Transparent:
So the solution for me could be to use the two cameras (for simplicity, let's just say I'm only rendering a blurred background and the "main" layer, and not a foreground layer). The background camera could use DoF and render sprites that are in the AlphaTest render queue, which would be sorted as though they were 3D, so I could probably just tweak their Z-positions to get them sorted correctly. Things in the background also probably won't be moving around much, so as long as their initial Z-position is correct, everything should be good. Then I could have another camera that doesn't use DoF, and it renders my "main layer" of stuff like characters/etc., all of which use the normal transparent-queue shaders and are sorted using the transparency queue's Y-axis sorting.
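As a rough sketch of that two-camera idea in URP (all field names are mine; `GetUniversalAdditionalCameraData` and the camera stack are URP's APIs in recent versions, and this could equally be configured in the Inspector):

```csharp
// Sketch of the two-camera setup: a Base camera renders the AlphaTest
// background with DoF, and an Overlay camera renders the transparent-queue
// "main" layer on top without DoF. Field names are assumptions.
using UnityEngine;
using UnityEngine.Rendering.Universal;

public class BackgroundDofStack : MonoBehaviour
{
    public Camera backgroundCam; // culls the background layer; DoF volume applies here
    public Camera mainCam;       // culls characters/etc.; Y-axis transparency sorting

    void Awake()
    {
        var baseData = backgroundCam.GetUniversalAdditionalCameraData();
        baseData.renderType = CameraRenderType.Base;

        var overlayData = mainCam.GetUniversalAdditionalCameraData();
        overlayData.renderType = CameraRenderType.Overlay;

        baseData.cameraStack.Add(mainCam); // overlay composites over the blurred base
    }
}
```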
One annoying issue that I just found out though is that DoF doesn't work at all with the 2D Renderer. DoF works fine with the URP Forward Renderer, but when I then apply 2D Renderer everything just becomes blurred. This guy on the forums even mentions how Ori couldn't be made with the 2D Renderer: https://forum.unity.com/threads/2d-renderer-in-universal-render-pipeline-in-2019-3.829851/#post-5644039 I also don't think that the Tails of Iron game is actually using DoF? They had some foreground plants that had blur on them, but it might have just been baked into the image.
A lot of stuff to think about haha - I think I'll set up a little test scene with the URP Forward Renderer and at least see if I can get the multi-camera setup working with the sorting-order stuff. :detective:
foriero wrote: BTW if your background layer won't change the DoF, then do fake DoF in Photoshop for all your background sprites. ImageMagick can also help: https://legacy.imagemagick.org/Usage/blur/
I keep seeing ImageMagick being used all over the place - I need to learn how it works haha. If I can't get the DoF working and need to resort to baking the blur into the images, then writing a script/batch that uses ImageMagick could be a great alternative. Will definitely keep that in mind, and thanks for the suggestion
Edit: Oh yeah, the problem with baking in the blur is that I couldn't do it with a Spine animation
Just wanted to say thanks for the suggestions/discussion so far Foriero :nerd:
Just to sort of demonstrate what I'm aiming to do - this mockup shows the overhead viewing angle of my game, and has the foreground elements (leafy plants) and the "background", which would be the ground way down below. It's supposed to be as though you were on a mountain or a cliff or something and could see down below:
Yep. Sorting Order on the Renderer, not in Graphics Settings. That is correct; I missed that one.
Z Depth is correct. You can stack "x" cameras, apply Gaussian blur on each of them, and then write a simple script to simulate Z-depth DoF. :-)
The 2D Renderer is in Preview and I'm sure they will fix it. If not, then the solution above is the way.
We use ImageMagick extensively in our IntergratorTool2D (search in the Asset Store).
foriero wrote: Z Depth is correct. You can stack "x" cameras, apply Gaussian blur on each of them, and then write a simple script to simulate Z-depth DoF. :-)
Is this what you mean:
1) Make "x" cameras, each with a Gaussian blur PP on it, and each only rendering a single Unity Layer
2) Place sprites/spine in the scene, and then 'set their amount of blur' by assigning them to the Layer for the appropriate camera. (Or do something fancy like in the Tails of Iron video, where the Layer is automatically set just by moving the sprite/spine on the Z-axis)
That could be another option too. It would also mean I could render everything as a normal sprite in the Transparent queue and use my normal y-axis sorting order throughout. The only downside I see is that each camera "layer" would be processing the blur PP effect even if nothing new is being rendered for that camera (correct me if I'm wrong on that). Of course, if nothing is being rendered on a camera, I assume I could just disable it. But if (for some reason, lol) I had 8 cameras and a scene that required 8 'levels of blur', it might be pretty slow. I'm still theorizing how we could use this effect in our project, so I'm not sure if 3 levels of blur would be sufficient or if we'd ever want more :lol:
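If I went that stacked-cameras route, the 'automatic layer from Z' part from point 2 might look something like this. Purely a sketch: the layer names and the depth-per-level value are hypothetical, and it assumes larger Z means further into the background:

```csharp
// Hypothetical helper: picks a blur "level" layer from the object's Z position,
// so moving a sprite backwards automatically routes it to a blurrier camera.
using UnityEngine;

public class BlurLayerFromZ : MonoBehaviour
{
    // Layer names are assumptions - one per stacked blur camera.
    static readonly string[] blurLayers = { "Blur0", "Blur1", "Blur2" };

    public float levelDepth = 2f; // world-Z range covered by each blur level

    void LateUpdate()
    {
        int level = Mathf.Clamp((int)(transform.position.z / levelDepth),
                                0, blurLayers.Length - 1);
        gameObject.layer = LayerMask.NameToLayer(blurLayers[level]);
    }
}
```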
I think my ideal setup right now is a single camera for the "background" with DoF, positioning things & their sorting order based on their Z-position, and then my normal camera for all of the 'main' layer stuff, using normal sprites/spine with the transparency Y-axis sorting. Since DoF doesn't work with the 2D Renderer, I'll have to use the Forward Renderer and test out how the lighting works. I've only used the 2D Lights before :confused: Going to give this a try though!
Also, damn, that Integrator tool looks pretty in-depth! Saw the example with the Bee where it can generate a prefab straight from the PSD. Impressive :yes:
Or a super simple solution: stretch a 100x100px blur overlay PNG. :-) That would likely be the most computationally efficient solution.
Otherwise your line of thinking is right on both the 1st and 2nd points. Anyway, I think we need to "push" Unity to have DoF working with depth.
- Edited
foriero wrote: Anyway, I think we need to "push" Unity to have DoF working with depth.
Haha, yeah, that would be nice. There are quite a few features that the 2D Renderer is missing. The 2D Lights are really nice, but I personally feel like they announced the 2D Renderer too early. They only recently added camera stacking, and you still can't use GrabPass in shaders (unless they added that recently, but I haven't read about it). The folks at Unity working on it have said they are focusing on optimization improvements and have a big list of features they're planning to add (I believe the Depth option was one of them). It's just unfortunate that there isn't an actual timeline, so who knows if it's going to be 6 months or 2 years :scared:
But on a lighter note, I believe I have a configuration that will work well with my setup to let me use DoF.
Quick video I made of a mockup "cliff" sort of thing - the DoF part is in the second half of the video: https://i.imgur.com/hypBxzd.mp4
Basically as Foriero was suggesting, to use camera stacking and have the Base camera render the 'background' layer with DoF, and then have another camera that renders the 'main' layer.
Since the 'background' area uses the sprites/spine on the AlphaTest render queue, they need to be manually 'sorted' by adjusting their Z-position. Meaning if there is some ground below with trees on it, the ground would need to be set to some Z distance (let's say 5), and a tree placed on top of it would need to be slightly closer, like 4.9999. If I place another tree down that should overlap the first tree and be in front of it, I would set its Z-value to 4.9998. I made a little editor window helper that lets me nudge the Z-position value by +/- 0.0001. There might be a better way of doing this, so if anyone has a suggestion I'd be glad to hear it. I'm still working on the 'workflow' of using the perspective camera to set up the scene, but it's going pretty well!
Edit: A better solution to emulate the y-axis sorting for this setup would probably be something like this:
1) Create an empty gameobject container that will hold all of the ground (dirt/grass/etc.) sprites for an "area" in the background (like the ground below the cliff in my mockup video). Set its Z-value to whatever it should be (5 in my case), then add all of the dirt/grass/etc. sprites to it. These are all fine to stay at the Z = 5 position.
2) Now create another empty gameobject container that will have all of the stuff that would normally need y-axis sorting, like plants and trees. Set the Z-value for this empty gameobject container to slightly higher than the ground-sprites container (like 5.0500).
3) Add all of the sprites/spine for plants/trees/etc to the container and position them in 2D space (their local Z should be 0, so they should still be at world Z of 5.0500 at this point). Ignore Z-fighting for now.
4) Create an editor window with a button: select the container gameobject for the plants/trees and press the button to build a temporary list of all child gameobjects, sort that list by y-value, and then adjust each child's Z-value based on its position in the list. So the first object wouldn't adjust its Z-value at all, the second would adjust it by -0.0001, the third by -0.0002, etc.
So essentially converting the y-axis sorting idea into tiny adjustments on the Z-axis, and doing it as a batch for each "area" that is in the background parallax/DoF.
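Step 4 could be sketched like this. Editor-only, and all names are mine; it assumes the camera setup from my examples above, where smaller Z draws in front (the 4.9999 tree) and objects higher up the screen should sit further back:

```csharp
// Sketch of step 4: sorts the selected container's children by world Y and
// converts the order into tiny local-Z offsets, so lower-on-screen objects
// draw in front. Class/menu names are assumptions.
using System.Linq;
using UnityEngine;
using UnityEditor;

public class YSortToZWindow : EditorWindow
{
    const float Step = 0.0001f; // Z nudge per sorting slot

    [MenuItem("Tools/Y-Sort To Z")]
    static void Open() => GetWindow<YSortToZWindow>();

    void OnGUI()
    {
        if (GUILayout.Button("Sort Selected Container") && Selection.activeTransform != null)
        {
            // Highest Y first: it keeps offset 0 and stays furthest back.
            var children = Selection.activeTransform.Cast<Transform>()
                .OrderByDescending(t => t.position.y)
                .ToList();

            for (int i = 0; i < children.Count; i++)
            {
                Undo.RecordObject(children[i], "Y-Sort To Z");
                var p = children[i].localPosition;
                children[i].localPosition = new Vector3(p.x, p.y, -Step * i);
            }
        }
    }
}
```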
Nice. Thanks for sharing James.
James, please take a look here: https://forum.unity.com/threads/2d-renderer-pp-dof-does-not-work.1032112/
Do you think we need a different DoF system than the two I briefly outlined there?
foriero wrote: James, please take a look here: https://forum.unity.com/threads/2d-renderer-pp-dof-does-not-work.1032112/
Do you think we need a different DoF system than the two I briefly outlined there?
There is a forum specifically for the 2D Renderer here: https://forum.unity.com/forums/2d-experimental-preview.104/
On that same unity forum link I posted earlier (talking about Ori), a unity developer responded specifically about the request for depth: https://forum.unity.com/threads/2d-renderer-in-universal-render-pipeline-in-2019-3.829851/#post-5646670
There is also an ongoing thread about feature requests for the 2D renderer here: https://forum.unity.com/threads/current-2d-renderers-missing-core-features.1023772/
Maybe you could post on that thread too?
Huh, so I wasn't expecting this - there isn't a "Sprite-Lit-Default" sort of material for sprites to be lit by 3D lighting using the regular URP Forward Renderer. Unless I'm missing something? :grinfake
Alright, looks like I've got some more research to do. I just looked up how Hollow Knight did their lighting, and apparently it was done using soft sprites with additive blending, not an actual light system. We're using normal-maps, so we wouldn't be able to do it like that. I'll need to figure out how to get everything working with the 3D lights.
I was able to get a Spine animation lit by using the URP/Spine/Sprite shader - including its normal-maps working with a Point Light (for some reason the Spot Light didn't light up the spine animation). Will look more into all this tomorrow :yes:
Just thinking out loud here, but I wonder if I could render my "main" gameplay stuff with the 2D Renderer, and only use the Forward Renderer for the 'background DoF' stuff :think:
Alright decided to stay up a bit longer and try to test this out haha. It looks like you can indeed use the Forward Renderer and the 2D Renderer on separate cameras - even if they are in the same Camera Stack. On the Camera component you can select which Renderer you want to use for that camera. You can add Renderer options to that list by going to the URP Pipeline Asset and adding Renderers to the Renderer List.
With this configuration I could use the Forward Renderer just for my background DoF stuff, and potentially "light" it using the additive-blend sprite overlay method used in Hollow Knight (and just ignore using normal-maps for the background). Gonna need to test it out, but could be cool!
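For what it's worth, the per-camera renderer choice can also be driven from code via `UniversalAdditionalCameraData.SetRenderer`. A sketch only: the indices are assumptions and must match the order of the Renderer List on the URP pipeline asset:

```csharp
// Sketch: assign different renderers from the pipeline asset's Renderer List
// to each camera. The indices (0 = Forward, 1 = 2D) are assumptions that
// must match the order configured on the URP asset.
using UnityEngine;
using UnityEngine.Rendering.Universal;

public class PerCameraRenderer : MonoBehaviour
{
    public Camera backgroundCam; // Forward Renderer, so DoF works
    public Camera mainCam;       // 2D Renderer, for 2D lights

    void Awake()
    {
        backgroundCam.GetUniversalAdditionalCameraData().SetRenderer(0);
        mainCam.GetUniversalAdditionalCameraData().SetRenderer(1);
    }
}
```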
Nice. Yes, the new render system is pretty flexible. On the other hand, the new 2D lighting gives you more stylistic options over your whole scene. Our release date is 3 years away, so we will cooperate with Unity to get things done during that time.
foriero wrote: Nice. Yes, the new render system is pretty flexible. On the other hand, the new 2D lighting gives you more stylistic options over your whole scene. Our release date is 3 years away, so we will cooperate with Unity to get things done during that time.
Yeah, I didn't realize how few light-types there are with the regular 3D lighting. The sprite-shape lights and polygon-shape lights with the 2D Renderer are pretty awesome. My guess/hope is that they'll get a lot of the desired features into the 2D Renderer within a year.
I tried to do a bit more research on Hollow Knight's lighting system, and the two tutorial videos I found were both wrong :lol:
I think this may basically be the way they do it: https://github.com/prime31/SpriteLightKit I think it's just some sort of basic setup where a separate camera has a black background and light-sprites are drawn onto it, and then that camera's Render Texture is composited with multiply blending (and/or some blending mode that maintains the tint of the sprite-lights). It's definitely just a lighting "overlay". I downloaded that Unity project and am testing it out, going to see exactly how they're doing it and then set up something similar (this example project is from Unity 2018, so I'm not sure how compatible the shaders and such will be).
Sorry for the late reply, and thanks for keeping the discussion so active, Marek. Greatly appreciate all your postings! :nerd:
Admittedly I didn't yet read the bottom postings, but noticed something that at least appears partly incorrect to me:
Jamez0r wrote: I noticed that when I enable "Write to Depth" on the material using Spine/Sprite/Unlit, it switches from the Transparent render queue to AlphaTest. I looked up AlphaTest and it appears to be "geometry" based (i.e., can't customize its rendering order?).
You should be able to offset the render queue value to anything you like via the Render Queue Offset material parameter. The grayed-out Render Queue parameter displays where you currently end up. So you should be able to customize the rendering order.
I will hopefully revisit the thread tomorrow; for today I will try to answer as many threads with remaining open questions as possible.
Hey Harald! Welcome back, hope you had a great holiday break :grinteeth:
Absolutely no rush to read this whole thread - I sort of branched off from being directly Spine-related, but I wanted to keep posting my findings (and get Marek's feedback) in case anyone else ends up wanting to do something similar. Right now I'm testing a configuration that I think will work well, and once I settle on a good setup I'll definitely update this post with the full configuration.
Harald wrote: You should be able to offset the render queue value to anything you like via the Render Queue Offset material parameter. The grayed-out Render Queue parameter displays where you currently end up. So you should be able to customize the rendering order.
You know, I did try doing that, bumping up the offset so that the queue ended up in the Transparent range, and it made the sprite appear blurry at all times regardless of its Z-value. It's possible that I had something configured wrong (I don't thiiiink so, but it's possible). It acted the same way a normal sprite with a default sprite shader would look if it was placed into the scene with DoF enabled (always blurry). I read in a couple of places on the Unity forums that the Transparent queue doesn't work the same way with writing to depth (like it either doesn't write to depth at all, or it targets a separate depth texture or something).
But yeah, take your time with doing whatever else you need to do - I have a configuration right now that seems to be working great :yes:
Very glad to hear you've got a working setup!
Thanks for your kind words, I hope you had great Christmas holidays as well! Happy new year!
Nice to have you back, Harald. :-) May I ask a question regarding the status quo of the Spine URP shaders for the URP 2D Renderer? Any limitations or unsupported features?