Prosecution Insights
Last updated: April 19, 2026
Application No. 18/877,917

Method and apparatus of signaling encapsulated data representing primary video sequence associated with auxiliary video sequence, and method and apparatus of parsing encapsulated data representing primary video sequence associated with auxiliary video sequence

Status: Non-Final OA (§103)
Filed: Dec 20, 2024
Examiner: PIERORAZIO, MICHAEL
Art Unit: 2426
Tech Center: 2400 — Computer Networks
Assignee: BEIJING XIAOMI MOBILE SOFTWARE CO., LTD.
OA Round: 1 (Non-Final)
Grant Probability: 88% (Favorable)
OA Rounds: 1-2
To Grant: 2y 0m
With Interview: 97%

Examiner Intelligence

Career Allow Rate: 88% (612 granted / 699 resolved; +29.6% vs TC avg), above average
Interview Lift: +9.6% across resolved cases with interview (moderate, ~+10% lift)
Avg Prosecution: 2y 0m (fast prosecutor); 18 applications currently pending
Total Applications: 717 across all art units (career history)
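
As a quick sanity check, the headline figures above can be recomputed from the raw counts. A minimal sketch in Python follows; the Tech Center average allow rate (about 58%) is an assumption back-solved from the stated +29.6% delta, and the interview lift is taken as given.

# Recompute the examiner's headline metrics from the raw counts above.
# The Tech Center average allow rate is an assumption back-solved from
# the stated +29.6% delta; every other input is shown on this page.

granted = 612
resolved = 699
tc_avg_allow_rate = 0.58                    # assumed: ~87.6% minus 29.6 points

allow_rate = granted / resolved             # ~0.876, displayed as 88%
delta_vs_tc = allow_rate - tc_avg_allow_rate

interview_lift = 0.096                      # +9.6 points across cases with interview
with_interview = allow_rate + interview_lift    # ~0.972, displayed as 97%

print(f"Career allow rate: {allow_rate:.1%} ({delta_vs_tc:+.1%} vs TC avg)")
print(f"With interview:    {with_interview:.1%}")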

Statute-Specific Performance

§101: 4.0% (-36.0% vs TC avg)
§102: 10.4% (-29.6% vs TC avg)
§103: 50.3% (+10.3% vs TC avg)
§112: 11.0% (-29.0% vs TC avg)
Tech Center averages are estimates • Based on career data from 699 resolved cases
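
All four deltas back-solve to the same 40.0% Tech Center baseline, which suggests the tool compares each statute against a single TC-wide estimate rather than per-statute averages. A minimal sketch of that comparison (the 40% baseline is an inference, not a figure stated on the page):

# Statute-specific occurrence rates vs. an estimated Tech Center baseline.
# Examiner rates are taken from the page; the 40% baseline is an assumption
# back-solved from the displayed deltas (e.g. 50.3% - 10.3 pts = 40.0%).

examiner_rates = {"§101": 0.040, "§102": 0.104, "§103": 0.503, "§112": 0.110}
tc_baseline = 0.400

for statute, rate in examiner_rates.items():
    delta = rate - tc_baseline
    print(f"{statute}: {rate:.1%} ({delta:+.1%} vs TC avg)")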

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

Claims 1, 3–10, 14, and 17–26 have been submitted for examination. Claims 1, 3–5, 14, 17–20, and 26 have been examined and rejected. Claims 6–10 and 21–25 are objected to.

Allowable Subject Matter

Claims 6–10 and 21–25 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1, 3–5, 14, 17–20, and 26 are rejected under 35 U.S.C. 103 as being unpatentable over Liu et al. (US 2011/0150101) in view of Li et al. (US 2016/0323590).

Regarding claims 1, 14, and 17, Liu discloses: A method of signaling encapsulated data representing a primary video sequence associated with an auxiliary video sequence, the primary and auxiliary video sequences resulting from image projections methods applied on signals captured by sensors (“at least one depth image collecting apparatus capable of outputting depth information of the scene and at least one ordinary image collecting apparatus capable of outputting color/grayscale video information of the scene”), (Liu, ¶ [0098], “The image collecting apparatuses on different viewpoints refer to at least one depth image collecting apparatus capable of outputting depth information of the scene and at least one ordinary image collecting apparatus capable of outputting color/grayscale video information of the scene or refer to at least one depth image collecting apparatus capable of outputting both depth information and color/grayscale video information of the scene. Before the collection of video images, a certain number of depth image collecting apparatuses and ordinary image collecting apparatuses may be set as required.
The number of image collecting apparatuses is appropriate so long as the collected video image data of the scene includes at least one depth image and at least two color images. In this step, at the time of collecting images of the scene, all image collecting apparatuses may be controlled to perform synchronous photographing and collection of images, so as to ensure synchronization of the collected video images and prevent a sharp difference between images collected at the same moment on the same viewpoint or different viewpoints.”) the method comprising:

- writing visual contents of the primary and auxiliary video sequences as encapsulated data; (“encode the corrected color images and depth images”) and (Liu, ¶ [0112], “an encoding and decoding standard such as MPEG-4 and H.264 may be applied to encode the corrected color images and depth images. The depth may be expressed through the MPEG standard. Currently, many methods are available to encode data of color images and depth images, for example, a 3D video encoding method based on layering. This method combines SEI information in the H.264 protocol with the layered encoding conception, encodes the video data of a channel (such as color image data of the channel) into a basic layer inclusive of only I frames and P frames through a general method, and then encodes the data of another channel (such as depth image data) into P frames. The reference frame applied in the prediction is a previous frame in this channel or the corresponding frame in the basic layer. In this way, high 2D/3D compatibility is achieved in the decoding. For traditional 2D display, it is only necessary to decode the basic layer data; for 3D display, it is necessary to decode all data. In this way, the user can select 2D display or 3D display and control the video decoding module to perform the corresponding decoding.”)

Liu does not explicitly teach:

- writing an alignment status as encapsulated data, the alignment status indicating whether the visual contents of the primary and auxiliary video sequences are aligned.

In a similar field of endeavor Li teaches:

- writing an alignment status as encapsulated data (“a bitstream for which the POC alignment operation is enabled and/or disabled is determined”), (Li, ¶ [0065], “For example, according to the identification and control information, a bitstream for which the POC alignment operation is enabled and/or disabled is determined, and the POC alignment operation is executed on the bitstream for which the POC alignment operation is enabled and/or disabled during decoding and/or displaying.”) the alignment status indicating whether the visual contents of the primary and auxiliary video sequences are aligned.
(“the POC alignment operation is executed on the bitstream for which the POC alignment operation is enabled and/or disabled during decoding and/or displaying”) (Li, ¶ [0065], “after the identification and control information is written into a bitstream, the bitstream may be transmitted, and a receiving side, which may be called a destination device, receives the multi-layer video bitstream, acquires the identification and control information from the multi-layer video bitstream, and executes a decoding operation and/or a displaying operation on the multi-layer video bitstream according to an indication of the identification and control information.”)

Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine the system for encoding two or more viewpoints as taught by Liu with the system for writing alignment status as taught by Li; the motivation is to “acquires the identification and control information from the multi-layer video bitstream, and executes a decoding operation and/or a displaying operation on the multi-layer video bitstream according to an indication of the identification and control information” as taught by Li (¶ [0065]).

Regarding claims 3 and 18, the combination of Liu and Li teaches: The method of claim 1, wherein the alignment status further indicates one of the following statuses:
- there is no alignment between the visual content of the primary video sequence and the visual content of the auxiliary video sequence;
- the primary video sequence and the auxiliary video sequence have been produced from a same image projection method but samples of their video frames are not aligned;
- there is one-to-one alignment between the samples of video frames of the primary video sequence and the samples of the video frames of the auxiliary video sequence; (“the POC alignment operation is executed on the bitstream for which the POC alignment operation is enabled and/or disabled during decoding and/or displaying”) or
- alignment between the visual content of the primary video sequence and the visual content of the auxiliary video sequence is unspecified. (Li, ¶ [0065], “For example, according to the identification and control information, a bitstream for which the POC alignment operation is enabled and/or disabled is determined, and the POC alignment operation is executed on the bitstream for which the POC alignment operation is enabled and/or disabled during decoding and/or displaying.”)

Regarding claims 4 and 19, the combination of Liu and Li teaches: The method of claim 3, wherein when the alignment status indicates that the primary video sequence and the auxiliary video sequence have been produced from a same image projection method but samples of their video frames are not aligned, the method further comprises writing an alignment reference data as encapsulated data, the alignment reference data indicating either the samples of video frames of the auxiliary video sequence have been aligned on the samples of video frames of the primary video sequence or inversely.
(Li, ¶ [0065], “For example, according to the identification and control information, a bitstream for which the POC alignment operation is enabled and/or disabled is determined, and the POC alignment operation is executed on the bitstream for which the POC alignment operation is enabled and/or disabled during decoding and/or displaying.”)

Regarding claims 5 and 20, the combination of Liu and Li teaches: The method of claim 1, wherein in a case that the alignment status indicates that the visual contents of the primary and auxiliary video sequences are aligned, the method further comprises writing an overlap status as encapsulated data, the overlap status indicating whether the visual content of the primary and the visual content of the auxiliary video sequences fully overlap or partially overlap. (Li, ¶ [0065], “For example, according to the identification and control information, a bitstream for which the POC alignment operation is enabled and/or disabled is determined, and the POC alignment operation is executed on the bitstream for which the POC alignment operation is enabled and/or disabled during decoding and/or displaying.”)

Regarding claim 26, the combination of Liu and Li teaches: An apparatus of parsing encapsulated data representing a primary video sequence associated with an auxiliary video sequence, the primary and auxiliary video sequences resulting from image projections methods applied on signals captured by sensors, the apparatus comprising: a processor; and a memory storing instructions executable by the processor, wherein the processor is configured to perform the method of claim 17. (Li, ¶ [0068], “the function may be provided by a single dedicated processor, a single shared processor, or a plurality of independent processors (some processors therein being probably shared). In addition, the processors shall not be interpreted as specially referring to hardware capable of executing software, and may implicitly include, but is not limited to, Digital Signal Processor (DSP) hardware, a Read-Only Memory (ROM) configured to store software, a Random Access Memory (RAM) and a non-volatile storage device instead.”)

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHAEL B PIERORAZIO whose telephone number is (571)270-3679. The examiner can normally be reached on Monday - Thursday, 8am - 5pm. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Nasser Goodarzi, can be reached on 571-270-4195. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MICHAEL B. PIERORAZIO/
Primary Examiner, Art Unit 2426
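
For readers less familiar with the claim language at issue, the disputed limitation amounts to a small piece of metadata: an alignment status written alongside the encapsulated primary and auxiliary video data, taking one of the four values recited in claims 3 and 18, plus an alignment reference indication from claims 4 and 19. The sketch below is purely illustrative; the box tag, enum values, and byte layout are hypothetical and do not come from the application, Liu, Li, or any file-format standard.

# Purely illustrative sketch of signaling an "alignment status" as
# encapsulated metadata next to primary/auxiliary video payloads.
# The box tag, enum values, and byte layout are hypothetical and are
# not taken from the application, Liu, Li, or any standard.
import struct
from enum import IntEnum

class AlignmentStatus(IntEnum):
    UNSPECIFIED = 0      # alignment between the visual contents is unspecified
    NOT_ALIGNED = 1      # no alignment between primary and auxiliary content
    SAME_PROJECTION = 2  # same projection method, but samples not aligned
    ONE_TO_ONE = 3       # one-to-one alignment between frame samples

def write_alignment_box(status, aux_aligned_on_primary):
    # Pack a tiny hypothetical metadata box: 4-byte tag, 1-byte status,
    # 1-byte alignment-reference flag (meaningful only for SAME_PROJECTION).
    return struct.pack(">4sBB", b"alst", int(status), int(aux_aligned_on_primary))

# Example: both sequences use the same projection method and the auxiliary
# samples have been aligned onto the primary samples (claims 4 and 19).
box = write_alignment_box(AlignmentStatus.SAME_PROJECTION, True)
print(box.hex())  # 616c73740201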

Prosecution Timeline

Dec 20, 2024
Application Filed
Mar 31, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12593015
TEMPERATURE CONTROL MODULE AND TEMPERATURE CONTROL METHOD
2y 5m to grant Granted Mar 31, 2026
Patent 12593092
DISPLAY DEVICE
2y 5m to grant Granted Mar 31, 2026
Patent 12593109
IMAGE DISPLAY APPARATUS AND OPERATING METHOD THEREOF
2y 5m to grant Granted Mar 31, 2026
Patent 12593083
Use of Steganographically-Encoded Time Information as Basis to Establish a Time Offset, to Facilitate Taking Content-Related Action
2y 5m to grant Granted Mar 31, 2026
Patent 12593103
METHODS AND SYSTEMS FOR GENERATING AND PROVIDING PROGRAM GUIDES AND CONTENT
2y 5m to grant Granted Mar 31, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 88%
With Interview: 97% (+9.6%)
Median Time to Grant: 2y 0m
PTA Risk: Low
Based on 699 resolved cases by this examiner. Grant probability derived from career allow rate.
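
The note above says the grant probability is read directly off the career allow rate, with the interview lift added on top. A minimal sketch of that arithmetic, assuming a simple additive model capped at 100% (the tool's exact model is not stated on this page), and of the months-to-"Xy Ym" display conversion:

# Sketch of the projection figures, assuming they are derived directly
# from the examiner's career statistics shown earlier on this page.

career_allow_rate = 612 / 699                    # ~87.6%, displayed as 88%
interview_lift = 0.096                           # +9.6 percentage points
with_interview = min(career_allow_rate + interview_lift, 1.0)

median_months_to_grant = 24                      # assumed; displayed as "2y 0m"
years, months = divmod(median_months_to_grant, 12)

print(f"Grant probability: {career_allow_rate:.0%}")                     # 88%
print(f"With interview: {with_interview:.0%} (+{interview_lift:.1%})")   # 97% (+9.6%)
print(f"Median time to grant: {years}y {months}m")                       # 2y 0m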
