Seewo ENOW big front end
Company: CVTE (Guangzhou Shiyuan Co., Ltd.)
Team: ENOW team, Seewo software platform center, CVTE Future Education
Author:
Background and purpose
Why write this series? That question was answered in the first article of the series, so please refer back to it if you're curious.
This series uses code demos as its thread: by building a demo up step by step, we get a deep look at the elements and steps involved in 3D rendering. It has the following characteristics:
1. We don't use WebGL to do the 3D rendering. The WebGL spec encapsulates a great deal of the underlying implementation for us, and in doing so hides some important details. WebGL essentially provides an interface for handing computation to the GPU; the GPU only makes that computation faster, and the same work can be done on the CPU alone, just less efficiently — which is perfectly adequate for learning and makes everything much clearer. So we'll use pure JS code for every operation and every draw, and end up with a "3D rendering engine";
2. We use the demo to understand the 3D rendering process, so we split the work into small stages, each driven by a periodic goal. Sometimes the goal is easy to reach; as the stage goals become more complex, some earlier implementations may be reworked to meet the new requirements;
3. Since the demo is both the thread and the subject, all of the code is available in this repository (github.com/ShaojieLiu/…). I hope you'll download and run it, or even build it from scratch — I believe you'll get something out of it!
In the end we will build the 3D rendering engine on top of a 2D rendering API. It will be able to parse and render the 3D model data formats commonly seen in the wild, with wireframe rendering / fragment rendering / texture mapping / lighting and shadows, and so on.

Recap
This is a series, so I hope readers work through it in order. Here is a link to the previous installment:
Building a 3D rendering engine from scratch in JS (Part 1)
Last time our demo reached the wireframe renderer stage: it takes our model (a cube made of 8 vertices and 12 triangular faces), projects it, and renders the wireframe onto a canvas, with support for rotation and different projection perspectives. Intuitively, it looks like this:
A wireframe really isn't enough — we can't even tell which points are in front and which are behind, so it cannot express the occlusion relationships between points and faces! The theme of this installment is therefore fragment rendering, which will take our little renderer's expressiveness up a level.
Fragment shader
" fragment " It's a proper noun , It roughly refers to the smallest graphics unit connected by the vertices that have been converted into the window coordinates . " Chip shader " It's also a proper noun , But shaders don't just deal with the color filling of slices / Occlusion relationship between patches , It also includes color and texture interpolation, and even lighting effects , What we want to achieve in this section is actually " Chip shader " Part of the function , For other functions, let's not press the table first . Like us demo1.3
For the cube shown in , Each cube consists of two triangular faces , Altogether 12 Face to face , So every triangle is a piece .
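To make that concrete, here is a minimal sketch of how such a cube mesh could be laid out — the coordinates and ordering are illustrative only and may differ from the actual demo data:
// Illustrative cube mesh: 8 vertices, 12 triangles (2 per face).
// The real demo stores vertices as Vector objects; plain {x, y, z} is used
// here only to show the shape of the data.
const vertices = [
  { x: -1, y: -1, z: -1 }, { x: 1, y: -1, z: -1 },
  { x: 1, y: 1, z: -1 },   { x: -1, y: 1, z: -1 },
  { x: -1, y: -1, z: 1 },  { x: 1, y: -1, z: 1 },
  { x: 1, y: 1, z: 1 },    { x: -1, y: 1, z: 1 },
]
// Each entry is a triple of vertex indices — one "fragment" in this article's sense.
const indices = [
  [0, 1, 2], [0, 2, 3], // back
  [4, 5, 6], [4, 6, 7], // front
  [0, 1, 5], [0, 5, 4], // bottom
  [2, 3, 7], [2, 7, 6], // top
  [0, 3, 7], [0, 7, 4], // left
  [1, 2, 6], [1, 6, 5], // right
]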
A first attempt
It doesn't sound hard: previously we rendered only the edges, now we also render the face's fill color. A look at the MDN docs (developer.mozilla.org/zh-CN/docs/…) shows that canvas has exactly the API we need: as long as the lineTo calls between beginPath and closePath connect into a closed shape, calling fill afterwards colors it in. Is it really that simple? Let's find out.
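Before touching the demo, here is a minimal standalone sketch of that canvas usage, assuming only that a #canvas element exists on the page (it is not part of the demo code):
// Fill a single triangle with the 2D canvas API.
const ctx = document.querySelector('#canvas').getContext('2d')
ctx.beginPath()
ctx.moveTo(50, 20)
ctx.lineTo(120, 150)
ctx.lineTo(20, 150)
ctx.closePath() // close the path back to the starting point
ctx.fillStyle = 'green'
ctx.fill() // fill the enclosed area
ctx.strokeStyle = 'blue'
ctx.stroke() // then draw the outline on top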
To meet this requirement we modify the code in 1.3/src/render/canvas.js so that it draws the fill color as well as the wireframe. You can open the 1.3 folder, make the changes below, and see the effect for yourself:
- Move beginPath and closePath out of drawline, so that the whole triangle becomes a single path that can be filled as one piece
- Set fillStyle, and after calling stroke() also call fill() to fill in the color

The changes are marked in the code block below; if anything is unclear, open the 2.1 folder and take a look.
class Canvas extends GObject {
  // Unchanged methods are omitted for now
  drawline(v1, v2, color) {
    /**
     * Changes start
     */
    // console.log('drawline', v1, v2)
    const ctx = this.ctx
    // ctx.beginPath()
    ctx.strokeStyle = color.toRgba()
    // ctx.moveTo(v1.x, v1.y)
    ctx.lineTo(v2.x, v2.y)
    // ctx.closePath()
    // ctx.stroke()
    /**
     * Changes end
     */
  }
  drawMesh(mesh, cameraIndex) {
    const { indices, vertices } = mesh
    const { w, h } = this
    let { position, target, up } = Camera.new(cameraIndex || 0)
    const view = Matrix.lookAtLH(position, target, up)
    const projection = Matrix.perspectiveFovLH(8, w / h, 0.1, 1)
    const rotation = Matrix.rotation(mesh.rotation)
    const translation = Matrix.translation(mesh.position)
    const world = rotation.multiply(translation)
    const transform = world.multiply(view).multiply(projection)
    // console.log('transform', transform, world, rotation, translation)
    const ctx = this.ctx
    const color = Color.blue()
    indices.forEach(ind => {
      const [v1, v2, v3] = ind.map(i => {
        return this.project(vertices[i], transform).position
      })
      /**
       * Changes start
       */
      ctx.beginPath()
      ctx.moveTo(v1.x, v1.y)
      this.drawline(v1, v2, color)
      this.drawline(v2, v3, color)
      this.drawline(v3, v1, color)
      ctx.fillStyle = Color.green().toRgba()
      ctx.closePath()
      ctx.fill()
      ctx.stroke()
      /**
       * Changes end
       */
    })
  }
}
If you've made the changes correctly you'll see a blue wireframe with green faces — everything looks beautiful!

But it isn't as simple as it looks. A still image is fine, but as soon as the model rotates, things go wrong immediately!

By now you've probably spotted the catch yourself: once the model rotates, strange occlusion appears (as shown below) — at certain moments, some fragments cover other fragments in ways they shouldn't.
Why does this happen?
Occlusion
A cube has 12 fragments, and each fragment is rendered independently of the others, so a fragment drawn earlier gets covered by any fragment drawn later. The order in which we listed the fragments carries no meaning at all, which is why the image above looks so strange. So what do we actually want? To help you think it through, here are the requirements.
1. A fragment with a larger z value occludes a fragment with a smaller z value. We use a right-handed coordinate system (see the previous article if this is unclear): the positive z axis points out of the page, so a point with a larger z value is closer to the camera. This should be easy to accept — near objects occlude far ones.
2. Fragment a may cover only part of fragment b, not the whole fragment.
3. Three fragments a, b, c may occlude one another in a cycle, each covering part of the next.
Suppose fragments a and b have vertices a1, a2, a3 and b1, b2, b3. If every z value of a is larger than every z value of b, things are easy: draw fragment b first and then fragment a, and a will cover b.
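In code, that amounts to sorting fragments by depth and drawing them far-to-near — a minimal sketch of this draw-order ("painter's") idea, where tris and drawTriangle are illustrative names rather than the demo's API:
// Sort fragments far-to-near by a rough depth measure, then draw in that order.
const averageZ = tri => (tri.v1.z + tri.v2.z + tri.v3.z) / 3
const painterDraw = (tris, drawTriangle) => {
  tris
    .slice() // don't mutate the caller's array
    .sort((a, b) => averageZ(a) - averageZ(b)) // smaller z = farther away, drawn first
    .forEach(tri => drawTriangle(tri.v1, tri.v2, tri.v3, tri.color))
}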
This draw-order idea sketched above feels natural: draw the far fragments first and the near ones last, and let the drawing order express the occlusion. But it isn't that easy — we don't only have to satisfy requirement 1, we also have to satisfy requirements 2 and 3. Requirement 3 is particularly awkward; I folded three pieces of paper into a shape to illustrate it, as shown in the picture below.

Requirement 3 is really just a special case of requirement 2, included to make the problem easier to picture. In this situation, which of the three fragments a, b, c should be drawn first? Whichever one you pick, the requirement cannot be met. So the draw-order scheme is dead. And what is the essence of its failure?
The fundamental reason the draw-order scheme cannot meet the requirement in every case is that the smallest unit of occlusion is not a fragment but a pixel. No matter how a programmer rearranges the code, as long as the drawn fragment is the smallest unit, the problem has no solution. So what we are looking for is an API that manipulates pixels rather than one that draws shapes (shape-drawing APIs all treat a whole shape as the smallest unit).
Manipulating pixels
Going through the MDN docs again, canvas also provides a lower-level API for exactly this kind of pixel-level operation (developer.mozilla.org/zh-CN/docs/…).
Its most important parameter is imageData, whose data format is described in the docs (developer.mozilla.org/zh-CN/docs/…). You can also obtain imageData through the getImageData interface. This API is lower level, more abstract, and less commonly used, so let's get some practice with it first.
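As a quick aside, here is a minimal sketch of reading pixels back with getImageData — the counterpart of putImageData; it is not used in the demos below and assumes only that a #canvas element exists on the page:
// Read a small region of the canvas back as ImageData.
const ctx = document.querySelector('#canvas').getContext('2d')
const img = ctx.getImageData(0, 0, 10, 10) // a 10×10 region
console.log(img.width, img.height) // 10 10
console.log(img.data.length) // 10 * 10 * 4 = 400
console.log(img.data.slice(0, 4)) // RGBA of the top-left pixel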
Since this API is so powerful, the small goal of our practice will be to use it to display a 256*256*256 color gradient. RGB has 3 degrees of freedom; the XY coordinates of the plane cover two of them, and we cover the remaining one with time. Take a moment to think about how you would implement this.
const main = () => {
  const c = document.querySelector('#canvas')
  const ctx = c.getContext('2d')
  const w = c.width
  const h = c.height
  let d = new Uint8ClampedArray(w * h * 4)
  const getData = t => {
    for (let i = 0; i < h; i++) {
      for (let j = 0; j < w; j++) {
        // R and G follow the pixel's X and Y position, B follows time
        d[i * 4 * w + j * 4 + 0] = 255 / w * j
        d[i * 4 * w + j * 4 + 1] = 255 / h * i
        d[i * 4 * w + j * 4 + 2] = Math.abs(t - 255)
        d[i * 4 * w + j * 4 + 3] = 255
      }
    }
    const data = new ImageData(d, w, h)
    return data
  }
  let time = 0
  setInterval(() => {
    ctx.putImageData(getData(time), 0, 0)
    time = (time + 1) % 512
  }, 10)
}
main()
With only about 20 lines of code, a beautiful color square appears — perhaps that is the charm of programming! The full code is in demo2.2.
Unfortunately, because every pixel in this image has a different color, conventional image compression degrades it badly, so the screenshot doesn't look as good as the running program. (A pity.)
With this demo in hand, let's talk about the imageData format. ImageData takes three constructor parameters: data, width, and height. The length of data is the product of width and height times 4; the pixels are stored row by row from top-left to bottom-right, each pixel occupying 4 channel values, RGBA. Each channel ranges from 0 to 255 (2 to the power of 8 is 256) — in other words, canvas internally stores RGBA as 4 channels at 8 bits of depth per channel.
For example, with width 10 and height 10, the first row of pixels are points 0 through 9; the first 4 entries of the data array control point 0's R, G, B, and A values, and the first 40 entries are [R0, G0, B0, A0, ..., R9, G9, B9, A9].
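To make that index arithmetic concrete: for a canvas of width w, pixel (x, y) starts at offset (y * w + x) * 4 in data. Here is a minimal sketch of a hypothetical helper (not part of the demo code) built on that formula:
// Write one pixel's RGBA values into an ImageData-style buffer.
const setPixel = (data, w, x, y, r, g, b, a = 255) => {
  const offset = (y * w + x) * 4 // 4 channel values per pixel, stored row by row
  data[offset + 0] = r // R
  data[offset + 1] = g // G
  data[offset + 2] = b // B
  data[offset + 3] = a // A
}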
Watch it for a while and the square keeps changing color. I believe this demo conveys how powerful the API is — imagine implementing this drawing requirement with the earlier drawline API: even if you could, the performance would suffer badly. Looking at this picture I can't help thinking of the Youth Olympic Games mascot, the colorful kidney...

Depth buffer
Now that we can manipulate the RGBA value of every pixel on the canvas, back to our requirement. When we draw the fragments in order, we know which pixels each fragment covers and what color those pixels should be; beyond that, we also need each pixel's Z value, so that later, when another fragment hits the same pixel (two fragments both want to color it), we can easily judge which occludes which and then either keep one or blend the two with some algorithm (for example, when the nearer fragment is translucent).
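As an aside, that "blend with some algorithm" could be ordinary alpha compositing — a minimal sketch (blendOver is a hypothetical helper; the demos in this series don't implement blending):
// Source-over blend: composite a translucent near color onto the far color already stored.
const blendOver = (near, far) => {
  const a = near.a / 255
  return {
    r: near.r * a + far.r * (1 - a),
    g: near.g * a + far.g * (1 - a),
    b: near.b * a + far.b * (1 - a),
    a: Math.min(255, near.a + far.a * (1 - a)),
  }
}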
In other words, we cannot paint a fragment's colors onto the canvas the moment we draw it, because those colors may need to be covered later by nearer fragments. So we need a temporary buffer that stores not only every point's color values but also its Z value, ready for depth comparison. This temporary area can be called a fragment drawing buffer; once all fragments have been drawn, it is applied to the canvas and the variable is cleared. It's worth mentioning that, for efficiency and space, real 3D rendering engines also limit the bit width and precision of the depth value; when the precision isn't enough and two objects are very close in depth, a depth conflict occurs, which shows up as flickering surfaces / geometry poking through.
Drawing with buffers
The idea is this: when a fragment is drawn, don't fill the canvas right away. First initialize a dataBuffer variable and a depthBuffer variable; push each point's color into dataBuffer and its Z value into depthBuffer, but before writing a color compare the Z values and keep the larger one, so that every pixel in the buffer ends up holding the color of the point with the largest Z. Once all the colors have been written, apply dataBuffer to the canvas.
The changes here are fairly large: we need to rework the earlier line- and face-drawing code to fit this approach, as follows:
class Canvas extends GObject {
  constructor(canvas) {
    super(canvas)
    this.canvas = canvas
    this.ctx = canvas.getContext('2d')
    this.w = canvas.width
    this.h = canvas.height
    // Added: the two buffers are created on initialization
    this.dataBuffer = new Uint8ClampedArray(this.w * this.h * 4)
    this.depthBuffer = new Array(this.w * this.h)
  }
  // Unchanged methods are omitted
  drawline(v1, v2, color) {
    // This will change completely; details follow later
  }
  drawMesh(mesh, cameraIndex) {
    // The matrix operations are unchanged and omitted here
    indices.forEach(ind => {
      // This will change completely; details follow later
    })
    // Added: apply the buffer to the canvas
    ctx.putImageData(new ImageData(this.dataBuffer, this.w, this.h), 0, 0)
  }
}
We have added the dataBuffer initialization and applied dataBuffer to the canvas at the end of drawing, so for now the canvas is blank — putImageData is writing an empty block of data. Next we need to modify the drawTriangle/drawLine implementations to write into this.dataBuffer so that the model's image returns to the canvas.
Redrawing points, lines, and faces
To get there we rewrite the implementations of drawPoint and drawLine so that they modify dataBuffer, call initBuffer at the start of drawMesh, and finally call putImageData to do the actual drawing.
class Canvas extends GObject {
  constructor(canvas) {
    super(canvas)
    this.canvas = canvas
    this.ctx = canvas.getContext('2d')
    this.w = canvas.width
    this.h = canvas.height
    // Added: initialize the buffers
    this.initBuffer()
  }
  initBuffer() {
    this.dataBuffer = new Uint8ClampedArray(this.w * this.h * 4)
    // -255535 acts as "infinitely far away" so the first write to a pixel always wins
    this.depthBuffer = Array.from({ length: this.w * this.h }).map(() => -255535)
  }
  drawPoint(v, color) {
    const x = Math.round(v.x)
    const y = Math.round(v.y)
    const index = x + y * this.w
    // Only write the pixel if this point is closer (larger z) than what is already stored
    if (v.z > this.depthBuffer[index]) {
      this.depthBuffer[index] = v.z
      this.dataBuffer[index * 4 + 0] = color.r
      this.dataBuffer[index * 4 + 1] = color.g
      this.dataBuffer[index * 4 + 2] = color.b
      this.dataBuffer[index * 4 + 3] = color.a
    }
  }
  drawLine(v1, v2, color) {
    const delta = v1.sub(v2)
    const deltaX = Math.abs(delta.x)
    const deltaY = Math.abs(delta.y)
    const len = deltaX > deltaY ? deltaX : deltaY
    // Step point by point between v1 and v2, letting drawPoint do the depth test
    for (let i = 0; i < len; i++) {
      const p = v1.interpolate(v2, i / len)
      this.drawPoint(p, color)
    }
  }
  drawMesh(mesh, cameraIndex) {
    this.initBuffer()
    const { indices, vertices } = mesh
    const transform = this.getTransform(mesh, cameraIndex)
    const ctx = this.ctx
    const color = Color.blue()
    indices.forEach((ind, i) => {
      const [v1, v2, v3] = ind.map(i => {
        return this.project(vertices[i], transform).position
      })
      this.drawLine(v1, v2, color)
      this.drawLine(v2, v3, color)
      this.drawLine(v3, v1, color)
    })
    ctx.putImageData(new ImageData(this.dataBuffer, this.w, this.h), 0, 0)
  }
}
There are a lot of code changes here; you can open demo2.3 to read the code and see it run. The result looks almost the same as demo1.3, but because our requirements are now more complex we render in a much more flexible way, and the implementation is quite different. Throughout this process I try not to throw the final solution at you all at once; instead each stage takes the shortest path to its own goal, and later requirement upgrades adopt more complex solutions as needed. Exploring and reaching the goal together is, after all, closer to how we develop day to day.
Drawing a fragment
Drawing only the wireframe doesn't show off the advantage of buffer-based drawing, so next we draw the fragments themselves. What is a fragment? By our definition here, a triangle. All we have to do now is find every point inside the triangle and call drawPoint to color them all.
There are many ways to find all the interior points, and readers are welcome to try different ones. For example, method one cuts the triangle along the y axis into horizontal strips 1px high; method two picks points D on side BC and connects the segment AD — as D slides along BC, AD sweeps over every interior point (a rough sketch of this follows below). If you know of scanning or cutting methods that are more efficient, feel free to discuss them in the comments.
Here we use the first method, "horizontal strip cutting".
drawTriangle(v1, v2, v3, color) {
  // Sort the three vertices by their Y value
  const [vUp, vMid, vDown] = [v1, v2, v3].sort((a, b) => a.y - b.y)
  // The point where the horizontal line through vMid cuts the segment vUp–vDown; call it vMag
  const vMag = vUp.interpolate(vDown, (vMid.y - vUp.y) / (vDown.y - vUp.y))
  for (let y = vUp.y; y < vDown.y; y++) {
    if (y < vMid.y) {
      // Upper half of the triangle
      const vUpMid = vUp.interpolate(vMid, (y - vUp.y) / (vMid.y - vUp.y))
      const vUpMag = vUp.interpolate(vMag, (y - vUp.y) / (vMag.y - vUp.y))
      this.drawLine(vUpMid, vUpMag, color)
    } else {
      // Lower half of the triangle
      const vDownMid = vDown.interpolate(vMid, (y - vDown.y) / (vMid.y - vDown.y))
      const vDownMag = vDown.interpolate(vMag, (y - vDown.y) / (vMag.y - vDown.y))
      this.drawLine(vDownMid, vDownMag, color)
    }
  }
}
The logic here is not complicated, but it does take a little geometry; if it's hard to follow, sketching it on paper helps. We sort the three vertices by Y value into vUp, vMid, vDown. The point where the horizontal line through vMid cuts the segment from vUp to vDown is called vMag. This splits the triangle in two — vUp, vMid, vMag and vDown, vMid, vMag — which we call the upper and lower triangles. We scan each of them with horizontal lines and drawLine, and the color fill is complete.
Depth conflict
Compared with demo2.1, the fragment occlusion here is now correct. But if you watch carefully you'll notice a very uncomfortable phenomenon: the border lines flicker. Why is that?
Because in reality there is no such thing as a border line — the way we draw one is simply by connecting the vertices, so the border line coincides exactly with the edge of the fragment. When two things coincide exactly, whose color should be shown? That comes down to calculation precision: at some pixels the line ends up in front, at others the face does, so the border becomes a dashed line, and once the model rotates it flickers.
What we actually want is that when elements drawn together (a fragment and its border line) differ only slightly in z, one should either occlude the other completely or not at all — no flickering and no dashing. So I applied a simple treatment: add a threshold to the z comparison so that elements drawn earlier are not easily occluded. This is of course not a perfect solution; feel free to discuss better ones in the comments.
drawPoint(v, color) {
  const x = Math.round(v.x)
  const y = Math.round(v.y)
  const index = x + y * this.w
  // The magic number here is one simple way to handle the depth conflict:
  // a new point must be clearly closer before it may overwrite an earlier one
  if (v.z > this.depthBuffer[index] + 0.0005) {
    this.depthBuffer[index] = v.z
    this.dataBuffer[index * 4 + 0] = color.r
    this.dataBuffer[index * 4 + 1] = color.g
    this.dataBuffer[index * 4 + 2] = color.b
    this.dataBuffer[index * 4 + 3] = color.a
  }
}
Finally, here is the effect of the demo with the depth conflict handled. (You can find the code and the running result in the 2.4 demo in the github repository.)
At this point we have completed a simple implementation of a fragment shader. Many details are idealized here — for example, each fragment has a single uniform color, whereas in reality fragments are mostly filled from texture maps, which requires color sampling and interpolation. So this is not yet a complete fragment shader; we will explore those parts together in the next chapters of this series — stay tuned.
Summary
To summarize: starting from the previous wireframe renderer demo, we tried to fill the fragments with color. A naive attempt showed that the occlusion relationships could not be expressed correctly; after thinking about the essence of the problem we found the pixel-manipulation API, used a data buffer plus a depth buffer to handle occlusion, and finally completed a simple fragment rendering.
Now that you've read this far, why not open your computer and clone the github repository — turning reading into hands-on practice will make the learning stick much better.
github repository: github.com/ShaojieLiu/…
Due to space limits, this installment ends here. It is going more slowly than I expected — there is still a lot to cover next: 3D file format parsing / texture mapping / lighting, and so on. From the comment-section interaction on the previous article I realized that some things weren't explained in enough detail, so this time I've slowed down, even devoting a dedicated demo to explaining and demonstrating pixel manipulation. I hope it helps.
Was this article helpful to you? Whether it was old news, crystal clear at a glance, somewhat over your head, or bookmarked to read later, you're welcome to like, bookmark, and follow — thank you all. If anything is imprecise, discussion and corrections are welcome.