Using Adobe Illustrator for UI design: the inner shadow effect in AI, and more

2021-10-14 11:47 | Author: admin | Source: unknown

1. The inner shadow effect

One of the things that has long dissatisfied me most about Adobe Illustrator is that AI has no inner shadow effect, so I could only roughly simulate one with the Inner Glow effect. Inner Glow, however, cannot set an offset for the effect, so it has its limitations. Recently, though, I found a method online to quickly and easily create an inner shadow effect in AI:

First, create a new document and use the Rounded Rectangle tool (or the Rectangle tool plus the Round Corners effect) to draw a figure like this:

Then execute the menu command Effect > SVG Filters > Apply SVG Filter, click the New SVG Filter button, and enter the following filter definition:

<filter id="inner-shadow">
    <!-- Shadow Offset -->
    <feOffset dx="5" dy="5"/>
    <!-- Shadow Blur -->
    <feGaussianBlur stdDeviation="3" result="offset-blur"/>
    <!-- Invert the drop shadow to create an inner shadow -->
    <feComposite operator="out" in="SourceGraphic" in2="offset-blur" result="inverse"/>
    <!-- Color & Opacity -->
    <feFlood flood-color="black" flood-opacity="0.75" result="color"/>
    <!-- Clip color inside shadow -->
    <feComposite operator="in" in="color" in2="inverse" result="shadow"/>
    <!-- Put shadow over original object -->
    <feComposite operator="over" in="shadow" in2="SourceGraphic"/>
</filter>

Finally, click OK to apply the filter, and the rounded rectangle gets an inner shadow. To change the shadow's color and opacity, edit the flood-color and flood-opacity attributes in the <feFlood> tag. This method should be the most labor-saving and effective way to create inner shadows in AI. Note, however, that this filter should usually be placed at the end of the effect list in the Appearance panel, because the graphic is rasterized once the filter is applied.

Note: this method comes from a discussion on Stack Exchange: "Inner Shadow issue in Illustrator CS5". An earlier source is svgquickref.com, once a leading SVG quick-reference site, but that domain has since expired and the site can no longer be accessed.

2. How it works

In the example above, we used AI's SVG filter feature to write a filter ourselves and apply it to the rounded rectangle, generating the inner shadow effect. Out of curiosity, I studied how it works and found that the water here runs quite deep. So next, let's talk about the SVG filter feature in AI. The following part is extended reading: it may be a little obscure and hard to follow, so mastering it is not required. Once you understand SVG filters, though, you can write simple filters of your own to use in AI.

First, what is SVG? SVG stands for Scalable Vector Graphics. An SVG file is actually plain-text XML that defines a graphic's shapes, fill colors, strokes, and so on through markup. On Wikipedia, the SVG format is widely used for national flags, emblems, maps, and infographics.

SVG can not only describe the shapes, fills, and strokes of vector graphics, but can also modify graphics further with filters; these are SVG filters. There is little information about SVG filters online, in either Chinese or English, but from the limited material available I have pieced together a general understanding of what an SVG filter is and how to use it.

Just as a vector graphic is described in SVG's plain-text XML format, so is an SVG filter. A filter starts with a <filter> tag and ends with a </filter> tag; the part between the two is the filter's definition. In AI, the id attribute of the <filter> tag is the filter name displayed in the Apply SVG Filter panel.

An important concept in SVG filters is the filter primitive. Each primitive performs one specific modification, such as a color transformation. All primitives are named with the prefix "fe", presumably an abbreviation of "filter effect" or "filter element": for example <feFlood>, <feGaussianBlur>, and <feComposite>. The first primitive in our filter is:

<feOffset dx="5" dy="5"/>

In this step, the <feOffset> primitive shifts the original image 5 pixels right and 5 pixels down (compare the bounding box in the figure above with the background grid):
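For intuition, this offset step can be sketched in Python with NumPy (my own illustration, not from the article): every pixel of an RGBA buffer moves by (dx, dy), and the vacated edge becomes transparent black.

```python
import numpy as np

# Sketch of <feOffset dx="5" dy="5">: shift an RGBA buffer right and down,
# filling the vacated edge with transparent black (assumes dx, dy >= 0).
def fe_offset(img: np.ndarray, dx: int, dy: int) -> np.ndarray:
    h, w = img.shape[:2]
    out = np.zeros_like(img)
    out[dy:, dx:] = img[:h - dy, :w - dx]
    return out

# A single opaque pixel at (0, 0) moves to (5, 5).
img = np.zeros((10, 10, 4))
img[0, 0] = [0, 0, 0, 1]
shifted = fe_offset(img, 5, 5)
```

The same idea generalizes to negative offsets by flipping the slice bounds; the SVG primitive handles all four directions.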

Then:

<!-- Shadow Blur -->
<feGaussianBlur
    stdDeviation="3"
    result="offset-blur"
/>

The <feGaussianBlur> primitive does exactly what its name says: it applies a Gaussian blur to the graphic. The stdDeviation attribute is the standard deviation of the Gaussian, which determines the blur radius. After blurring the figure above, the result is temporarily stored in a buffer named "offset-blur". Next comes:

<feComposite operator="out" in="SourceGraphic" in2="offset-blur" result="inverse"/>

The <feComposite> primitive combines two images into one according to a rule that you choose by specifying a blending operator. There are six operators: arithmetic, over, in, out, atop, and xor. arithmetic takes four manually specified parameters, k1, k2, k3, and k4, and transforms each of the [R, G, B, A] channels as result = k1 * in * in2 + k2 * in + k3 * in2 + k4, where in and in2 are the values of one channel of a pixel in the two input images and result is the corresponding channel of the composited image. The other five operators, over, in, out, atop, and xor, are shown in the figure:
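These composite rules reduce to simple per-pixel arithmetic. Here is a minimal sketch in Python/NumPy (my own illustration, using premultiplied-alpha RGBA, the form in which the Porter-Duff operators are usually stated): each named operator is result = Fa * in + Fb * in2 with operator-specific fractions Fa and Fb.

```python
import numpy as np

# Sketch of <feComposite> on premultiplied-alpha RGBA arrays.
# a1 and a2 are the alpha channels of the in and in2 images.
def fe_composite(op, i1, i2, k=(0, 0, 0, 0)):
    a1, a2 = i1[..., 3:4], i2[..., 3:4]
    if op == "arithmetic":
        k1, k2, k3, k4 = k
        return np.clip(k1 * i1 * i2 + k2 * i1 + k3 * i2 + k4, 0, 1)
    factors = {            # (Fa, Fb) per Porter-Duff operator
        "over": (1, 1 - a1),
        "in":   (a2, 0),
        "out":  (1 - a2, 0),
        "atop": (a2, 1 - a1),
        "xor":  (1 - a2, 1 - a1),
    }
    fa, fb = factors[op]
    return i1 * fa + i2 * fb

# "out" keeps in only where in2 is absent -- exactly the inverse-shadow trick.
src  = np.array([[[0, 0, 0, 1.0]]])   # one opaque pixel
blur = np.array([[[0, 0, 0, 0.4]]])   # one 40%-covered pixel
inverse = fe_composite("out", src, blur)
```

Here `inverse` keeps 60% of the source pixel's coverage, which is why compositing the source "out" of its blurred drop shadow inverts the shadow.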

<feComposite> requires two input images. In this example, input 1, specified by the in attribute, is the source graphic, SourceGraphic; input 2, specified by the in2 attribute, is offset-blur, the buffered output of the <feGaussianBlur> step above. Compositing the two with the out operator gives the following result:

Finally, this composited result is stored in a buffer named "inverse".

If you want to know more about image compositing, see Porter and Duff's paper "Compositing Digital Images": http://delivery.acm.org/10.1145/810000/808606/p253-porter.pdf

Next is the <feFlood> primitive:

<!-- Color & Opacity -->
<feFlood
    flood-color="black"
    flood-opacity="0.75"
    result="color"
/>
The <feFlood> primitive fills the region controlled by the SVG filter with a solid color. In this example, the filter region is filled with black at 75% opacity:

Note that <feFlood> takes no input. Its fill result is output to a buffer named "color".
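In pixel terms the flood is trivial, which a short sketch makes clear (again my own Python/NumPy illustration, in premultiplied alpha): every pixel of the filter region becomes the same RGBA value.

```python
import numpy as np

# Sketch of <feFlood flood-color="black" flood-opacity="0.75">:
# fill the whole filter region with one premultiplied-alpha RGBA color.
def fe_flood(shape, rgb, opacity):
    h, w = shape
    flood = np.empty((h, w, 4))
    flood[..., :3] = [c * opacity for c in rgb]  # premultiplied color
    flood[..., 3] = opacity
    return flood

color = fe_flood((10, 10), (0.0, 0.0, 0.0), 0.75)
```

Since there is no input image, the primitive depends only on its own attributes, matching the note above.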

Then comes another <feComposite>:

<!-- Clip color inside shadow -->
<feComposite
    operator="in"
    in="color"
    in2="inverse"
    result="shadow"
/>

This time, the color and inverse images from the buffers are composited with the in operator, producing the inner shadow itself; the result is saved as "shadow", as shown in the figure:

Finally, a last <feComposite> merges shadow with the source graphic:

<!-- Put shadow over original object -->
<feComposite
    operator="over"
    in="shadow"
    in2="SourceGraphic"
/>
After this final compositing step, we get the result we wanted:

That is roughly how an inner shadow is generated with AI's SVG filters. If you review the whole process, you will notice that many primitives take the output of an earlier step as their input. You can therefore draw a node graph from the input/output relationships between the primitives; for our example filter, it looks like this:

This way, the relationships between the primitives are clear at a glance.
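To make the node graph concrete, here is a sketch of the whole pipeline in Python/NumPy (my own illustration, with a crude 3x3 box blur standing in for a true Gaussian blur), chaining the same primitives on a small white square:

```python
import numpy as np

def fe_offset(img, dx, dy):                       # <feOffset dx dy>
    out = np.zeros_like(img)
    out[dy:, dx:] = img[:img.shape[0] - dy, :img.shape[1] - dx]
    return out

def box_blur(img):                                # stand-in for <feGaussianBlur>
    p = np.pad(img, ((1, 1), (1, 1), (0, 0)))
    return sum(p[y:y + img.shape[0], x:x + img.shape[1]]
               for y in range(3) for x in range(3)) / 9.0

def flood(shape, opacity):                        # <feFlood> black at `opacity`
    f = np.zeros(shape + (4,))
    f[..., 3] = opacity
    return f

def comp(op, i1, i2):                             # <feComposite>, premultiplied alpha
    a1, a2 = i1[..., 3:4], i2[..., 3:4]
    fa, fb = {"over": (1, 1 - a1), "in": (a2, 0), "out": (1 - a2, 0)}[op]
    return i1 * fa + i2 * fb

src = np.zeros((20, 20, 4))
src[2:18, 2:18] = [1, 1, 1, 1.0]                  # SourceGraphic: opaque white square

offset      = fe_offset(src, 3, 3)                # Shadow Offset
offset_blur = box_blur(offset)                    # Shadow Blur (approximate)
inverse     = comp("out", src, offset_blur)       # invert the drop shadow
color       = flood(src.shape[:2], 0.75)          # black at 75% opacity
shadow      = comp("in", color, inverse)          # clip color inside shadow
result      = comp("over", shadow, src)           # shadow over SourceGraphic
# Pixels near the square's top-left inner edge come out darker than the centre.
```

Each intermediate variable corresponds to one `result` buffer in the filter, so the data flow mirrors the node graph exactly.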

In addition, SVG filters have some other interesting capabilities. Another example: enhancing the bump effect of an image.

If you have used 3D creation software or a game engine such as Unity, you may know that in computer graphics, the bumpy feel of a surface can be achieved with a bump map or a normal map. A bump map supplies a bump texture in addition to the color texture. The bump texture is grayscale: the whiter a point, the higher it sits; the darker, the lower. A normal map is similar, except that the grayscale height texture is replaced by a surface-normal texture encoded in color, where the [R, G, B] value of each pixel corresponds to the [x, y, z] components of that point's normal. Bump mapping and normal mapping greatly increase the realism of objects in the computer world without overburdening the renderer the way modeling every geometric detail as extra polygons would:

As shown in the CrazyBump screenshot above, a stone wall texture is used here. The bump texture is generated from the grayscale of the stone wall texture itself, and the computer then produces the final realistic color image from the bump texture, the color texture, the color and direction of the incident light, and the viewing direction.
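The height-to-normal conversion that tools like CrazyBump perform can be sketched as follows (my own Python/NumPy illustration): take the height map's gradient, build a unit normal per pixel, and pack its [x, y, z] into [R, G, B] on the usual 0..1 scale.

```python
import numpy as np

# Sketch: derive a tangent-space normal map from a grayscale height map.
# Normals are packed into RGB as rgb = normal * 0.5 + 0.5, the common
# convention, so a perfectly flat surface encodes as (0.5, 0.5, 1.0).
def height_to_normal(height, strength=1.0):
    dy, dx = np.gradient(height)                    # height slope per pixel
    n = np.dstack((-dx * strength, -dy * strength, np.ones_like(height)))
    n /= np.linalg.norm(n, axis=2, keepdims=True)   # normalize to unit length
    return n * 0.5 + 0.5                            # pack [-1, 1] into [0, 1]

flat = height_to_normal(np.zeros((4, 4)))           # flat surface -> all (0.5, 0.5, 1.0)
```

The strength parameter exaggerates or softens the bumps, much like the intensity slider in such tools.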

