DETAILED ACTION
1. This Office Action is in response to Application No. 18/959,913, filed on 02/28/2025. Claims 2-21 are pending.
Notice of Pre-AIA or AIA Status
2. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Double Patenting
3. The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory obviousness-type double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); and In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on a nonstatutory double patenting ground provided the conflicting application or patent either is shown to be commonly owned with this application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement.
Effective January 1, 1994, a registered attorney or agent of record may sign a terminal disclaimer. A terminal disclaimer signed by the assignee must fully comply with 37 CFR 3.73(b).
4. Claims 2-4, 6, and 8 are rejected on the ground of nonstatutory obviousness-type double patenting as being unpatentable over claims 1, 4, 6, and 10 of U.S. Patent No. 12,184,869, as indicated below.
For claims 2-4, 6, and 8, although the conflicting claims are not identical, both sets of claims are directed to a method for encoding an extended-reality (XR) video frame. As clearly indicated in the table below, each claimed limitation of claims 2-4, 6, and 8 of the current application is anticipated by the corresponding limitations of claims 1, 4, 6, and 10 of the reference patent.
Current Application
US 12184869
Claim 2:
A method for encoding an extended-reality (XR) video frame, comprising:
obtaining an XR video frame comprising a background image and a virtual object overlaying at least a portion of the background image;
dividing the XR video frame into a virtual region and a real region, wherein the virtual region comprises at least a portion of the virtual object, and wherein the real region comprises a region of the background image separate from the virtual region;
determining, for the virtual region, a first complexity criterion associated with virtual regions;
determining, for the real region, a second complexity criterion associated with real regions;
and encoding the: virtual region based at least in part on a first quantization parameter associated with the first complexity criterion, and real region based at least in part on a second quantization parameter associated with the second complexity criterion.
claim 3’s limitation:
obtaining an input indicative of an area of focus via a gaze-tracking user interface, wherein dividing the XR video frame is based at least in part on the area of focus.
Claim 4’s limitation:
wherein the first quantization parameter is based at least in part on a first initial quantization parameter and the first initial quantization parameter corresponds to a complexity associated with a reference virtual region.
claim 6’s limitation:
wherein the second quantization parameter is based at least in part on a second initial quantization parameter and the second initial quantization parameter correspond to a complexity associated with a reference real region.
claim 8’s limitation:
wherein a first initial quantization parameter associated with a reference virtual region is smaller than the second initial quantization parameter associated with reference real region
Claim 1
A method for encoding an extended-reality (XR) video frame, comprising:
obtaining an XR video frame comprising a background image and a virtual object overlaying at least a portion of the background image;
dividing the XR video frame into a first region comprising virtual content and a second region comprising real content within a physical environment, and a third region comprising virtual content and real content, wherein the first region comprises at least a portion of the virtual object, the second region comprises a region of the background image separate from the first region, and the third region comprises a portion of the virtual object and a portion of the background image;
claim 4’s limitations:
the first region comprises at least a portion of the virtual object that satisfies a first complexity criterion; the second region comprises a first portion of the second region of the background image separate from the first region, wherein the first portion of the second region fails to satisfy a second complexity criterion; and the third region comprises at least one of: (i) a portion of the at least one virtual object that fails to satisfy the first complexity criterion, and (ii) a second portion of the second region of the background image separate from the first region, wherein the second portion of the second region satisfies the second complexity criterion
determining, for the first region, a corresponding first quantization parameter based on an initial quantization parameter associated with virtual regions;
determining, for the second region, a corresponding second quantization parameter based on an initial quantization parameter associated with real regions;
determining, for the third region, a corresponding third quantization parameter based on an initial quantization parameter associated with medial regions;
and encoding the first region based on the corresponding first quantization parameter, the second region based on the corresponding second quantization parameter, and the third region based on the corresponding third quantization parameter
claim 6’s limitation:
further comprising obtaining an input indicative of an area of focus via a gaze-tracking user interface, wherein dividing the XR video frame is based at least in part on the area of focus
claim 1’s limitation:
determining, for the first region, a corresponding first quantization parameter based on an initial quantization parameter associated with virtual regions; determining, for the second region, a corresponding second quantization parameter based on an initial quantization parameter associated with real regions
claim 1’s limitation:
determining, for the second region, a corresponding second quantization parameter based on an initial quantization parameter associated with real regions;
claim 10’s limitation:
wherein the initial quantization parameter associated with virtual regions is smaller than the initial quantization parameter associated with real regions.
5. Claims 9-11, 13, and 15 are rejected on the ground of nonstatutory obviousness-type double patenting as being unpatentable over claims 7, 12-13, and 18 of U.S. Patent No. 12,184,869, as indicated below.
For claims 9-11, 13, and 15, although the conflicting claims are not identical, both sets of claims are directed to a non-transitory computer readable medium. As clearly indicated in the table below, each claimed limitation of claims 9-11, 13, and 15 of the current application is anticipated by the corresponding limitations of claims 7, 12-13, and 18 of the reference patent.
Current Application
US 12184869
Claim 9:
A non-transitory computer readable medium, comprising computer code executable by at least one processor to:
obtain an XR video frame comprising a background image and a virtual object overlaying at least a portion of the background image;
divide the XR video frame into a virtual region and a real region, wherein the virtual region comprises at least a portion of the virtual object, and wherein the real region comprises a region of the background image separate from the virtual region;
determine, for the virtual region, a first complexity criterion associated with virtual regions;
determine, for the real region, a second complexity criterion associated with real regions;
and encode the: virtual region based at least in part on a first quantization parameter associated with the first complexity criterion, and real region based at least in part on a second quantization parameter associated with the second complexity criterion.
claim 10’s limitation:
obtain an input indicative of an area of focus via a gaze-tracking user interface, wherein dividing the XR video frame is based at least in part on the area of focus.
Claim 11’s limitation:
wherein the first quantization parameter is based at least in part on a first initial quantization parameter and the first initial quantization parameter corresponds to a complexity associated with a reference virtual region.
claim 13’s limitation:
wherein the second quantization parameter is based at least in part on a second initial quantization parameter and the second initial quantization parameter correspond to a complexity associated with a reference real region
claim 15’s limitation:
wherein a first initial quantization parameter associated with a reference virtual region is smaller than the second initial quantization parameter associated with reference real region
Claim 7
A non-transitory computer readable medium, comprising computer code executable by at least one processor to:
obtain an extended reality (XR) video frame comprising a background image and a virtual object overlaying at least a portion of the background image;
divide the XR video frame into a first region comprising virtual content, a second region comprising real content within a physical environment, and a third region comprising virtual content and real content, wherein the first region comprises at least a portion of the virtual object, the second region comprises a region of the background image separate from the first region, and the third region comprises a portion of the virtual object and a portion of the background image;
claim 13’s limitations:
the first region comprises at least a portion of the virtual object that satisfies a first complexity criterion; the second region comprises a first portion of the second region of the background image separate from the first region, wherein the first portion of the second region fails to satisfy a second complexity criterion; and the third region comprises at least one of: (i) a portion of the virtual object that fails to satisfy the first complexity criterion, and (ii) a second portion of the second region of the background image separate from the first region, wherein the second portion of the second region satisfies the second complexity criterion
determine, for the first region, a corresponding first quantization parameter based on an initial quantization parameter associated with virtual regions;
determine, for the second region, a corresponding second quantization parameter based on an initial quantization parameter associated with real regions;
determine, for the third region, a corresponding third quantization parameter based on an initial quantization parameter associated with medial regions;
and encode the first region based on the corresponding first quantization parameter, the second region based on the corresponding second quantization parameter, and the third region based on the corresponding third quantization parameter
claim 18’s limitation:
obtain an input indicative of an area of focus via a gaze-tracking user interface, wherein the computer readable code to divide the XR video frame further comprises computer readable code to divide the XR video frame based at least in part on the area of focus
claim 7’s limitation:
determine, for the first region, a corresponding first quantization parameter based on an initial quantization parameter associated with virtual regions; determine, for the second region, a corresponding second quantization parameter based on an initial quantization parameter associated with real region
claim 7’s limitation:
determine, for the first region, a corresponding first quantization parameter based on an initial quantization parameter associated with virtual regions; determine, for the second region, a corresponding second quantization parameter based on an initial quantization parameter associated with real region
claim 12’s limitation:
wherein the initial region size associated with virtual regions is smaller than the initial region size associated with real regions
6. Claims 16-18 and 20 are rejected on the ground of nonstatutory obviousness-type double patenting as being unpatentable over claims 19, 4, and 6 of U.S. Patent No. 12,184,869, as indicated below.
For claims 16-18 and 20, although the conflicting claims are not identical, both sets of claims are directed to a device for encoding an extended-reality (XR) video frame. As clearly indicated in the table below, each claimed limitation of claims 16-18 and 20 of the current application is anticipated by the corresponding limitations of claims 19, 4, and 6 of the reference patent.
Current Application
US 12184869
Claim 16:
A device comprising: an image capturing device configured to capture a background image; at least one processor; and at least one computer readable media comprising computer readable code executable by the at least one processor to:
obtaining an XR video frame comprising a background image and a virtual object overlaying at least a portion of the background image;
dividing the XR video frame into a virtual region and a real region, wherein the virtual region comprises at least a portion of the virtual object, and wherein the real region comprises a region of the background image separate from the virtual region;
determining, for the virtual region, a first complexity criterion associated with virtual regions;
determining, for the real region, a second complexity criterion associated with real regions;
and encoding the: virtual region based at least in part on a first quantization parameter associated with the first complexity criterion, and real region based at least in part on a second quantization parameter associated with the second complexity criterion.
claim 17’s limitation:
obtaining an input indicative of an area of focus via a gaze-tracking user interface, wherein dividing the XR video frame is based at least in part on the area of focus.
Claim 18’s limitation:
wherein the first quantization parameter is based at least in part on a first initial quantization parameter and the first initial quantization parameter corresponds to a complexity associated with a reference virtual region.
claim 20’s limitation:
wherein the second quantization parameter is based at least in part on a second initial quantization parameter and the second initial quantization parameter correspond to a complexity associated with a reference real region.
Claim 19
A device, comprising: an image capturing device configured to capture a background image; at least one processor; and at least one computer readable media comprising computer readable code executable by the at least one processor to:
obtaining an XR video frame comprising a background image and a virtual object overlaying at least a portion of the background image;
dividing the XR video frame into a first region comprising virtual content and a second region comprising real content within a physical environment, and a third region comprising virtual content and real content, wherein the first region comprises at least a portion of the virtual object, the second region comprises a region of the background image separate from the first region, and the third region comprises a portion of the virtual object and a portion of the background image;
claim 4’s limitations:
the first region comprises at least a portion of the virtual object that satisfies a first complexity criterion; the second region comprises a first portion of the second region of the background image separate from the first region, wherein the first portion of the second region fails to satisfy a second complexity criterion; and the third region comprises at least one of: (i) a portion of the at least one virtual object that fails to satisfy the first complexity criterion, and (ii) a second portion of the second region of the background image separate from the first region, wherein the second portion of the second region satisfies the second complexity criterion
determining, for the first region, a corresponding first quantization parameter based on an initial quantization parameter associated with virtual regions;
determining, for the second region, a corresponding second quantization parameter based on an initial quantization parameter associated with real regions;
determining, for the third region, a corresponding third quantization parameter based on an initial quantization parameter associated with medial regions;
and encoding the first region based on the corresponding first quantization parameter, the second region based on the corresponding second quantization parameter, and the third region based on the corresponding third quantization parameter
claim 6’s limitation:
further comprising obtaining an input indicative of an area of focus via a gaze-tracking user interface, wherein dividing the XR video frame is based at least in part on the area of focus
claim 19’s limitation:
determining, for the first region, a corresponding first quantization parameter based on an initial quantization parameter associated with virtual regions; determining, for the second region, a corresponding second quantization parameter based on an initial quantization parameter associated with real regions
claim 19’s limitation:
determining, for the second region, a corresponding second quantization parameter based on an initial quantization parameter associated with real regions;
Claim Rejections - 35 USC § 112
7. The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
8. Claim 2 and its dependent claims 3-8 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for pre-AIA applications, the applicant) regards as the invention.
Claim 2 first recites the limitation "obtaining an XR video frame comprising a background image and a virtual object overlaying at least a portion of the background image," and then recites the limitation "dividing the XR video frame into a virtual region and a real region, wherein the virtual region comprises at least a portion of the virtual object, and wherein the real region comprises a region of the background image separate from the virtual region." However, it is not clear how the real region can be separate from the virtual region, since the virtual object (from which the virtual region is derived) overlays at least a portion of the background image (from which the real region is derived).
Thus, the scope of claim 2 and its dependent claims 3-8 is unclear.
9. Claim 9 and its dependent claims 10-15 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for pre-AIA applications, the applicant) regards as the invention, for reasons similar to those set forth above for claim 2 and its dependent claims 3-8.
10. Claim 16 and its dependent claims 17-21 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for pre-AIA applications, the applicant) regards as the invention, for reasons similar to those set forth above for claim 2 and its dependent claims 3-8.
Claim Rejections - 35 USC § 103
11. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
12. Claims 2, 4-9, 11-16, and 18-21 are rejected under 35 U.S.C. 103 as being unpatentable over TIAN et al. (CN 108540801) in view of CHENG et al. (CN 104239271).
Regarding claim 2, TIAN teaches a method for encoding an extended-reality (XR) video frame (figs. 1-3; abstract, … an ROI coding method for virtual reality wireless transmission, wherein the method comprises an ROI usage method, region division, and a three-step QP value setting; the virtual reality content image is compressed into a designated format (H.264/H.265) by the PC-side encoder; virtual reality belongs to extended reality, according to the attached Wikipedia file), comprising:
obtaining an XR video frame (figs. 1-3) comprising a background image (as shown in fig. 1, the white/black blocks are the background image) and a virtual object (fig. 1, the gate is a virtual object) on at least a portion of the background image (as shown in figs. 1-3);
dividing the XR video frame into a virtual region (figs. 1-3, the region containing the gate is the virtual region) and a real region (figs. 1-3, the region containing the black/white blocks, the ceiling, and the wall), wherein the virtual region comprises at least a portion of the virtual object (as shown in figs. 1-3), and wherein the real region comprises a region of the background image separate from the virtual region (as shown in figs. 1-3);
determining, for the virtual region, a first complexity criterion associated with virtual regions (as shown in fig. 3, the center/left/right/top/bottom regions have different complexities, corresponding to the sensitivity of the human eye; the complexity criterion depends on human visual sensitivity; page 3, … the human eye has the highest sensitivity to the Center (central) region of the image and the lowest visual sensitivity to the edge of the image, applying the ROI coding technology to different areas);
determining, for the real region, a second complexity criterion associated with real regions (fig. 3, the region containing the black/white blocks, the ceiling, and the wall is the real region; since human visual sensitivity is the first complexity criterion, the distance to the center of human visual sensitivity is the second complexity criterion);
and encoding the virtual region (abstract, … region division and a three-step QP value setting; the virtual reality content image is compressed into a designated format (H.264/H.265) by the PC-side encoder…; using the ROI coding techniques with the region encoding parameters set, the encoding bit rate is effectively reduced without influencing the viewing effect) based at least in part on a first quantization parameter associated with the first complexity criterion (as shown in fig. 3; pages 4-5, … firstly dividing the original image into Top, Bottom, Left, Right, Left-Top, Right-Top, Left-Bottom, Right-Bottom, and Center, nine regions… using the ROI coding technology, different areas are provided with different encoding quantization step sizes (QP)), and the real region based at least in part on a second quantization parameter associated with the second complexity criterion (as shown in fig. 3; pages 4-5, … firstly dividing the original image into nine regions… different areas are provided with different encoding quantization step sizes (QP)… a three-step QP setting is then adopted from the inside toward the edge, with larger QP values closer to the edge, so as to reduce the video encoding load of the data processing module and the transmission delay of the whole system).
It is noted that TIAN does not explicitly disclose a virtual object overlaying at least a portion of the background image.
CHENG discloses a virtual object overlaying at least a portion of the background image (paragraph 0163, … the virtual object is superimposed on the background image according to the undershoot amount).
It would have been obvious before the effective filing date of the claimed invention to one of ordinary skill in the art to incorporate the technique of a virtual object overlaying at least a portion of the background image as a modification to the method, for the benefit that the synthesized image, after overlapping, enables the system to realize the virtual target (paragraph 0163).
Regarding claim 9, TIAN teaches a non-transitory computer readable medium, comprising computer code executable by at least one processor (page 2, … uses computer simulation to generate the virtual world in a three-dimensional space, provided to a user as a simulation of visual sense; such a computer has a non-transitory computer readable medium comprising computer code executable by at least one processor), to obtain an extended-reality (XR) video frame (figs. 1-3; abstract, … an ROI coding method for virtual reality wireless transmission, wherein the method comprises an ROI usage method, region division, and a three-step QP value setting; the virtual reality content image is compressed into a designated format (H.264/H.265) by the PC-side encoder; virtual reality belongs to extended reality, according to the attached Wikipedia file), comprising:
obtaining an XR video frame (figs. 1-3) comprising a background image (as shown in fig. 1, the white/black blocks are the background image) and a virtual object (fig. 1, the gate is a virtual object) on at least a portion of the background image (as shown in figs. 1-3);
dividing the XR video frame into a virtual region (figs. 1-3, the region containing the gate is the virtual region) and a real region (figs. 1-3, the region containing the black/white blocks, the ceiling, and the wall), wherein the virtual region comprises at least a portion of the virtual object (as shown in figs. 1-3), and wherein the real region comprises a region of the background image separate from the virtual region (as shown in figs. 1-3);
determining, for the virtual region, a first complexity criterion associated with virtual regions (as shown in fig. 3, the center/left/right/top/bottom regions have different complexities, corresponding to the sensitivity of the human eye; the complexity criterion depends on human visual sensitivity; page 3, … the human eye has the highest sensitivity to the Center (central) region of the image and the lowest visual sensitivity to the edge of the image, applying the ROI coding technology to different areas);
determining, for the real region, a second complexity criterion associated with real regions (fig. 3, the region containing the black/white blocks, the ceiling, and the wall is the real region; since human visual sensitivity is the first complexity criterion, the distance to the center of human visual sensitivity is the second complexity criterion);
and encoding the virtual region (abstract, … region division and a three-step QP value setting; the virtual reality content image is compressed into a designated format (H.264/H.265) by the PC-side encoder…; using the ROI coding techniques with the region encoding parameters set, the encoding bit rate is effectively reduced without influencing the viewing effect) based at least in part on a first quantization parameter associated with the first complexity criterion (as shown in fig. 3; pages 4-5, … firstly dividing the original image into Top, Bottom, Left, Right, Left-Top, Right-Top, Left-Bottom, Right-Bottom, and Center, nine regions… using the ROI coding technology, different areas are provided with different encoding quantization step sizes (QP)), and the real region based at least in part on a second quantization parameter associated with the second complexity criterion (as shown in fig. 3; pages 4-5, … firstly dividing the original image into nine regions… different areas are provided with different encoding quantization step sizes (QP)… a three-step QP setting is then adopted from the inside toward the edge, with larger QP values closer to the edge, so as to reduce the video encoding load of the data processing module and the transmission delay of the whole system).
It is noted that TIAN does not explicitly disclose a virtual object overlaying at least a portion of the background image.
CHENG discloses a virtual object overlaying at least a portion of the background image (paragraph 0163, … the virtual object is superimposed on the background image according to the undershoot amount).
It would have been obvious before the effective filing date of the claimed invention to one of ordinary skill in the art to incorporate the technique of a virtual object overlaying at least a portion of the background image as a modification to the non-transitory computer readable medium, for the benefit that the synthesized image, after overlapping, enables the system to realize the virtual target (paragraph 0163).
Regarding claim 16, TIAN teaches a device, comprising at least one processor (page 2, … uses computer simulation to generate the virtual world in a three-dimension space, provided to a user regarding simulation of visual sense; in which a computer has a one processor) ,to obtain an extended-reality (XR) video frame (fig. 1 -fig. 3; abstract, … a ROI coding method of virtual reality wireless transmission, wherein the invention method comprises ROI using method, region division, QP value is set to three steps. the virtual reality content image compressed into a designated format (H.264 /H.265) by the PC end encoder; virtual reality belongs to extended-reality, according to attached Wikipedia file)), , comprising:
obtaining an XR video frame (fig. 1-fig. 3) comprising a background image (as shown in fig. 1, the white/black blocks, are background image) and a virtual object (fig. 1, the gate is a virtual object) on at least a portion of the background image (as shown in fig. 1-fig. 3);
dividing the XR video frame into a virtual region (fig. 1-3, the region has the gate is virtual region) and a real region (fig. 1-3, the region has the black/white blocks and the ceiling and the wall), wherein the virtual region comprises at least a portion of the virtual object (as shown in fig. 1-3), and wherein the real region comprises a region of the background image separate from the virtual region (as shown in fig. 1-3);
determining, for the virtual region, a first complexity criterion associated with virtual regions (as shown in fig. 3, the center/left/right/top/bottom regions have different complexities, namely the sensitivity of the human eye; the complexity criterion depends on human visual sensitivity; page 3, … human eyes have the highest sensitivity to the Center (central) region of the image and the lowest visual sensitivity at the edge of the image, using the ROI coding technology for the different areas);
determining, for the real region, a second complexity criterion associated with real regions (fig. 3, the region containing the black/white blocks, the ceiling, and the wall is the real region; since human visual sensitivity is the first complexity criterion, the distance to the center of human visual sensitivity is the second complexity criterion);
and encoding the virtual region (abstract, … region division, QP value setting in three steps; the virtual reality content image is compressed into a designated format (H.264/H.265) by the PC-end encoder; using the ROI coding techniques, the area encoding parameters are set and the encoding bit rate is effectively reduced without influencing the viewing effect) based at least in part on a first quantization parameter associated with the first complexity criterion (as shown in fig. 3; pages 4-5, … firstly dividing the original image into Top (top), Bottom (bottom), Left (left), Right (right), Left-Top (upper-left corner), Right-Top (upper-right corner), Left-Bottom (lower-left corner), Right-Bottom (lower-right corner), and Center (center area), nine regions … using the ROI coding technology, the different areas are provided with different encoding quantization step sizes (QP)), and encoding the real region based at least in part on a second quantization parameter associated with the second complexity criterion (as shown in fig. 3; pages 4-5, … a three-step QP setting is then adopted, with a larger QP value the closer a region is to the edge, so as to reduce the data processing of the video encoding module and reduce the transmission delay of the whole system);
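The region-based QP assignment quoted above from TIAN can be sketched as follows. This is an illustrative example only: the function name, QP offsets, and sensitivity tiers are hypothetical choices, not code or values taken from the reference; only the nine-region names and the edge-gets-larger-QP principle come from the quoted passage.

```python
# Illustrative sketch: assign a QP to each of nine named regions, giving the
# center region (highest human visual sensitivity) the lowest QP (finest
# quantization) and edge/corner regions progressively higher QPs.

def region_qp(region: str, base_qp: int = 26) -> int:
    """Return a hypothetical QP for one of nine named regions."""
    edges = {"top", "bottom", "left", "right"}
    corners = {"left-top", "right-top", "left-bottom", "right-bottom"}
    if region == "center":
        return base_qp          # highest sensitivity: finest quantization
    if region in edges:
        return base_qp + 4      # lower sensitivity: coarser quantization
    if region in corners:
        return base_qp + 8      # lowest sensitivity: coarsest quantization
    raise ValueError(f"unknown region: {region}")

# Map every region to its QP in one pass.
qps = {r: region_qp(r) for r in
       ["center", "top", "bottom", "left", "right",
        "left-top", "right-top", "left-bottom", "right-bottom"]}
```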
It is noted that TIAN does not explicitly disclose an image capturing device configured to capture a background image, or a virtual object overlaying at least a portion of the background image.
CHENG discloses an image capturing device configured to capture a background image (paragraph 0138, … real-time camera data from the SFP optical fiber interface; paragraph 0153, … is the background image camera); and
a virtual object overlaying at least a portion of the background image (paragraph 0163, … the virtual object according to the undershoot amount is superimposed on the background image).
It would have been obvious before the effective filing date of the claimed invention to one of ordinary skill in the art to incorporate an image capturing device configured to capture a background image and a virtual object overlaying at least a portion of the background image as a modification to the non-transitory computer readable medium, for the benefit that the synthetic image is finally provided to the system after overlapping so that the virtual target can be realized (paragraph 0163).
Regarding claim 4, the combination of TIAN and CHENG teaches the limitations recited in claim 2 as discussed above. In addition, TIAN further discloses that the first quantization parameter is based at least in part on a first initial quantization parameter, and the first initial quantization parameter corresponds to a complexity associated with a reference virtual region (as shown in fig. 3, the first initial quantization parameter is the initial QP, which corresponds to a complexity (visual sensitivity) associated with a reference virtual region (the gate)).
Regarding claim 5, the combination of TIAN and CHENG teaches the limitations recited in claim 4 as discussed above. In addition, TIAN further discloses adjusting the first initial quantization parameter by a proportional amount in response to determining that the complexity is greater than the first complexity associated with the reference virtual region (fig. 3, the QP (first initial quantization parameter) is adjusted by a proportional amount in response to determining that the complexity (visual sensitivity) is greater (the center region has the highest sensitivity) than the first complexity associated with the reference virtual region (the bottom region, for example); page 5, … the closer a macroblock is to the edge, the lower the visual sensitivity of the human eye; the QP values are set so as to be further refined).
Regarding claim 6, the combination of TIAN and CHENG teaches the limitations recited in claim 2 as discussed above. In addition, TIAN further discloses that the second quantization parameter is based at least in part on a second initial quantization parameter, and the second initial quantization parameter corresponds to a complexity associated with a reference real region (as shown in fig. 3, the second initial quantization parameter is the initial QP, which corresponds to a complexity (visual sensitivity) associated with a reference real region (the region containing the black/white blocks)).
Regarding claim 7, the combination of TIAN and CHENG teaches the limitations recited in claim 6 as discussed above. In addition, TIAN further discloses adjusting the second initial quantization parameter by a proportional amount in response to determining that the complexity is less than the complexity associated with the reference real region (fig. 3, the quantization step of the top region (second initial quantization parameter) is adjusted by a proportional amount in response to determining that the complexity (visual sensitivity) is less than the complexity associated with the reference real region (the goal in the center region, which has the highest sensitivity); page 5, … the closer a macroblock is to the edge, the lower the visual sensitivity of the human eye; the QP values are set so as to be further refined).
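The proportional QP adjustment addressed in claims 5 and 7 can be sketched as follows. This is an illustrative example only: the function, the linear scale factor, and the clamping to the H.264/H.265 QP range 0-51 are hypothetical assumptions for exposition, not taken from TIAN or from the claims.

```python
# Illustrative sketch: adjust an initial QP in proportion to how a region's
# complexity (visual sensitivity) compares with that of a reference region.
# More complex/sensitive than the reference -> lower QP (finer quantization);
# less complex/sensitive -> higher QP (coarser quantization).

def adjust_qp(initial_qp: int, complexity: float,
              reference_complexity: float, scale: float = 4.0) -> int:
    """Return a QP adjusted proportionally to the complexity ratio,
    clamped to the H.264/H.265 range 0-51."""
    ratio = complexity / reference_complexity
    if ratio > 1.0:
        # Region is more sensitive than the reference: decrease the QP.
        return max(0, round(initial_qp - scale * (ratio - 1.0)))
    if ratio < 1.0:
        # Region is less sensitive than the reference: increase the QP.
        return min(51, round(initial_qp + scale * (1.0 - ratio)))
    return initial_qp
```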
Regarding claim 8, the combination of TIAN and CHENG teaches the limitations recited in claim 7 as discussed above. In addition, TIAN further discloses that a first initial quantization parameter associated with a reference virtual region (e.g., the QP of the left region) is smaller than the second initial quantization parameter associated with the reference real region (the QP of the goal in the center region, which has the highest sensitivity).
Regarding claim 11, the combination of TIAN and CHENG teaches the limitations recited in claim 9 as discussed above. In addition, TIAN further discloses that the first quantization parameter is based at least in part on a first initial quantization parameter, and the first initial quantization parameter corresponds to a complexity associated with a reference virtual region (as shown in fig. 3, the first initial quantization parameter is the initial QP, which corresponds to a complexity (visual sensitivity) associated with a reference virtual region (the gate)).
Regarding claim 12, the combination of TIAN and CHENG teaches the limitations recited in claim 11 as discussed above. In addition, TIAN further discloses adjusting the first initial quantization parameter by a proportional amount in response to determining that the complexity is greater than the first complexity associated with the reference virtual region (fig. 3, the QP (first initial quantization parameter) is adjusted by a proportional amount in response to determining that the complexity (visual sensitivity) is greater (the center region has the highest sensitivity) than the first complexity associated with the reference virtual region (the bottom region, for example); page 5, … the closer a macroblock is to the edge, the lower the visual sensitivity of the human eye; the QP values are set so as to be further refined).
Regarding claim 13, the combination of TIAN and CHENG teaches the limitations recited in claim 9 as discussed above. In addition, TIAN further discloses that the second quantization parameter is based at least in part on a second initial quantization parameter, and the second initial quantization parameter corresponds to a complexity associated with a reference real region (as shown in fig. 3, the second initial quantization parameter is the initial QP, which corresponds to a complexity (visual sensitivity) associated with a reference real region (the region containing the black/white blocks)).
Regarding claim 14, the combination of TIAN and CHENG teaches the limitations recited in claim 13 as discussed above. In addition, TIAN further discloses adjusting the second initial quantization parameter by a proportional amount in response to determining that the complexity is less than the complexity associated with the reference real region (fig. 3, the quantization step of the top region (second initial quantization parameter) is adjusted by a proportional amount in response to determining that the complexity (visual sensitivity) is less than the complexity associated with the reference real region (the goal in the center region, which has the highest sensitivity); page 5, … the closer a macroblock is to the edge, the lower the visual sensitivity of the human eye; the QP values are set so as to be further refined).
Regarding claim 15, the combination of TIAN and CHENG teaches the limitations recited in claim 14 as discussed above. In addition, TIAN further discloses that a first initial quantization parameter associated with a reference virtual region (e.g., the QP of the left region) is smaller than the second initial quantization parameter associated with the reference real region (the QP of the goal in the center region, which has the highest sensitivity).
Regarding claim 18, the combination of TIAN and CHENG teaches the limitations recited in claim 16 as discussed above. In addition, TIAN further discloses that the first quantization parameter is based at least in part on a first initial quantization parameter, and the first initial quantization parameter corresponds to a complexity associated with a reference virtual region (as shown in fig. 3, the first initial quantization parameter is the initial QP, which corresponds to a complexity (visual sensitivity) associated with a reference virtual region (the gate)).
Regarding claim 19, the combination of TIAN and CHENG teaches the limitations recited in claim 18 as discussed above. In addition, TIAN further discloses adjusting the first initial quantization parameter by a proportional amount in response to determining that the complexity is greater than the first complexity associated with the reference virtual region (fig. 3, the QP (first initial quantization parameter) is adjusted by a proportional amount in response to determining that the complexity (visual sensitivity) is greater (the center region has the highest sensitivity) than the first complexity associated with the reference virtual region (the bottom region, for example); page 5, … the closer a macroblock is to the edge, the lower the visual sensitivity of the human eye; the QP values are set so as to be further refined).
Regarding claim 20, the combination of TIAN and CHENG teaches the limitations recited in claim 16 as discussed above. In addition, TIAN further discloses that the second quantization parameter is based at least in part on a second initial quantization parameter, and the second initial quantization parameter corresponds to a complexity associated with a reference real region (as shown in fig. 3, the second initial quantization parameter is the initial QP, which corresponds to a complexity (visual sensitivity) associated with a reference real region (the region containing the black/white blocks)).
Regarding claim 21, the combination of TIAN and CHENG teaches the limitations recited in claim 20 as discussed above. In addition, TIAN further discloses adjusting the second initial quantization parameter by a proportional amount in response to determining that the complexity is less than the complexity associated with the reference real region (fig. 3, the quantization step of the top region (second initial quantization parameter) is adjusted by a proportional amount in response to determining that the complexity (visual sensitivity) is less than the complexity associated with the reference real region (the goal in the center region, which has the highest sensitivity); page 5, … the closer a macroblock is to the edge, the lower the visual sensitivity of the human eye; the QP values are set so as to be further refined).
13. Claims 3, 10, 17 are rejected under 35 U.S.C. 103 as being unpatentable over TIAN et al. (CN 108540801) in view of CHENG et al. (CN 104239271), and further in view of DELAMONT (US 20200368616).
Regarding claim 3, the combination of TIAN and CHENG teaches the limitations recited in claim 2 as discussed above. In addition, TIAN further discloses obtaining an input indicative of an area of focus, wherein dividing the XR video frame is based at least in part on the area of focus (as shown on page 3, dividing the original image into Top (top), Bottom (bottom), Left (left), Right (right), Left-Top (upper-left corner), Right-Top (upper-right corner), Left-Bottom (lower-left corner), Right-Bottom (lower-right corner), and Center (central region), nine regions … human eyes have the highest sensitivity to the Center (central region) of the image and the lowest visual sensitivity at the edge of the image; the center region is the area of focus).
It is noted that TIAN does not explicitly disclose a gaze-tracking user interface.
DELAMONT discloses a gaze-tracking user interface (paragraph 2125, … an eye-tracking-based display).
It would have been obvious before the effective filing date of the claimed invention to one of ordinary skill in the art to incorporate a gaze-tracking user interface as a modification to the method, for the benefit of tracking a user's eye (paragraph 2125).
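The area-of-focus division discussed for claims 3, 10, and 17 can be sketched as follows. This is an illustrative example only: the function and the 3x3 mapping are hypothetical, not code from TIAN or DELAMONT; it merely shows how a gaze coordinate (as from an eye-tracking interface) could be classified into one of the nine named regions.

```python
# Illustrative sketch: map a gaze point (x, y) on a frame of the given size
# to one of the nine named regions, so the region containing the gaze can be
# treated as the area of focus.

def gaze_region(x: float, y: float, width: int, height: int) -> str:
    """Classify a gaze coordinate into one of nine named frame regions."""
    col = min(int(3 * x / width), 2)   # 0 = left, 1 = center, 2 = right
    row = min(int(3 * y / height), 2)  # 0 = top, 1 = center, 2 = bottom
    names = [["left-top", "top", "right-top"],
             ["left", "center", "right"],
             ["left-bottom", "bottom", "right-bottom"]]
    return names[row][col]
```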
Regarding claim 10, the combination of TIAN and CHENG teaches the limitations recited in claim 9 as discussed above. In addition, TIAN further discloses obtaining an input indicative of an area of focus, wherein dividing the XR video frame is based at least in part on the area of focus (as shown on page 3, dividing the original image into Top (top), Bottom (bottom), Left (left), Right (right), Left-Top (upper-left corner), Right-Top (upper-right corner), Left-Bottom (lower-left corner), Right-Bottom (lower-right corner), and Center (central region), nine regions … human eyes have the highest sensitivity to the Center (central region) of the image and the lowest visual sensitivity at the edge of the image; the center region is the area of focus).
It is noted that TIAN does not explicitly disclose a gaze-tracking user interface.
DELAMONT discloses a gaze-tracking user interface (paragraph 2125, … an eye-tracking-based display).
It would have been obvious before the effective filing date of the claimed invention to one of ordinary skill in the art to incorporate a gaze-tracking user interface as a modification to the non-transitory computer readable medium, for the benefit of tracking a user's eye (paragraph 2125).
Regarding claim 17, the combination of TIAN and CHENG teaches the limitations recited in claim 16 as discussed above. In addition, TIAN further discloses obtaining an input indicative of an area of focus, wherein dividing the XR video frame is based at least in part on the area of focus (as shown on page 3, dividing the original image into Top (top), Bottom (bottom), Left (left), Right (right), Left-Top (upper-left corner), Right-Top (upper-right corner), Left-Bottom (lower-left corner), Right-Bottom (lower-right corner), and Center (central region), nine regions … human eyes have the highest sensitivity to the Center (central region) of the image and the lowest visual sensitivity at the edge of the image; the center region is the area of focus).
It is noted that TIAN does not explicitly disclose a gaze-tracking user interface.
DELAMONT discloses a gaze-tracking user interface (paragraph 2125, … an eye-tracking-based display).
It would have been obvious before the effective filing date of the claimed invention to one of ordinary skill in the art to incorporate a gaze-tracking user interface as a modification to the device, for the benefit of tracking a user's eye (paragraph 2125).
14. Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. See form 892.
15. Contact Information
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ZAIHAN JIANG, whose telephone number is (571) 272-1399. The examiner can normally be reached on a flexible schedule.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Sath Perungavoor, can be reached at (571) 272-7455. The fax phone number for the organization where this application or proceeding is assigned is 571-270-0655.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ZAIHAN JIANG/Primary Examiner, Art Unit 2488