Prosecution Insights
Last updated: April 19, 2026
Application No. 18/115,248

METHOD AND APPARATUS FOR IMMERSIVE VIDEO ENCODING AND DECODING, AND METHOD FOR TRANSMITTING A BITSTREAM GENERATED BY THE IMMERSIVE VIDEO ENCODING METHOD

Non-Final OA: §101, §103, §112
Filed: Feb 28, 2023
Examiner: TRUONG, LAWRENCE QUANG
Art Unit: 2434
Tech Center: 2400 (Computer Networks)
Assignee: Research & Business Foundation Sungkyunkwan University
OA Round: 3 (Non-Final)
Grant Probability: 100% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 2m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 100% (12 granted / 12 resolved), +42.0% vs TC avg (above average)
Interview Lift: +0.0% among resolved cases with interview (minimal lift)
Avg Prosecution: 2y 2m (fast prosecutor), with 20 applications currently pending
Total Applications: 32 across all art units (career history)

Statute-Specific Performance

§101: 13.1% (-26.9% vs TC avg)
§103: 48.3% (+8.3% vs TC avg)
§102: 11.4% (-28.6% vs TC avg)
§112: 24.4% (-15.6% vs TC avg)
Deltas are measured against an estimated Tech Center average (for example, the implied TC average for §103 is 48.3% - 8.3% = 40.0%). Based on career data from 12 resolved cases.

Office Action

Rejections under §101, §103, and §112
DETAILED ACTION

Claims 4, 5, 8, and 9 are canceled. Claims 1-3, 6, 7, 10, and 11 are pending.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/17/2025 has been entered.

Response to Arguments

Applicant's arguments filed 12/17/2025 have been fully considered but they are moot in view of new grounds of rejection.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

Claims 1-3, 6, 7, 10, and 11 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Regarding claims 1, 10, and 11, the claims recite "transmitting the selected bitstream". The limitation "the selected bitstream" has insufficient antecedent basis. For the purpose of examination, the claim limitation will be interpreted as "transmitting the selected candidate bitstream." Claims 2, 3, 6, and 7 inherit this rejection.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-3, 6, 7, 10, and 11 are rejected under 35 U.S.C. 101 because the claimed invention is directed to abstract ideas without significantly more. Claims 1, 10, and 11 recite grouping images, calculating, determining, and selecting. Claim 2 recites grouping images. Claim 3 recites a definition for grouping images. Claim 6 recites mathematical calculations. Claim 7 recites determining a bitstream level. This judicial exception is not integrated into a practical application because the additional step of grouping images (i.e., generating and manipulating data) does not amount to significantly more. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because "transmitting the selected bitstream" may appear to be additional generation of data.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1-3, 6, 7, 10, and 11 are rejected under 35 U.S.C. 103 as being unpatentable over US 20210329209 A1 to Lee et al. (Lee) in view of US 20190238860 A1 to Lim et al. (Lim).

Regarding claim 1, Lee teaches a method for encoding an immersive image, which is implemented in an apparatus for encoding an immersive image, the method comprising: grouping images for a virtual reality space into groups (Lee [0106], e.g., "Grouping of view images may be performed based on spatial continuity between the view images or spatial proximity between the view images. Alternatively, view images corresponding to an arbitrary spatial region may be set as a group").

Lee does not explicitly teach, but Lim teaches: calculating, based on view information, a view weight of each of the groups (Lim [0110], e.g., "The degree of importance may be determined from the above-described high-importance position information, and the importance degree may be set to be higher as the area resides closer to the user's viewpoint position in one input video source"); determining, based on the view weight, a bitstream level of each of the groups (Lim [0182], e.g., "Specifically, FIG. 11 from (a) to (d) show the structure of a mixed video stream generated on the basis of the area of importance set in consideration of the above-mentioned feature of the 360° video. The encoding rates of the whole video by the tile structures shown in FIG. 11 from (a) to (d) are 12 Mbps, 11 Mbps, 10 Mbps and 8 Mbps, respectively"); selecting, among candidate bitstreams, a candidate bitstream corresponding to the determined bitstream level (Lim [0228], e.g., "Step S2230 selects, using the encoded bitstreams and the high-importance position information obtained by Step S2210 and Step S2220, a mixed video stream that matches the high-importance position information among the plurality of encoded bitstreams, based on the high-importance position information"); and transmitting the selected bitstream (Lim [0229], e.g., "transmitting the selected mixed video stream to the user terminal apparatus 150"), wherein the candidate bitstreams are generated by encoding the groups in different levels (Lim [0182], quoted above; also see [0128], [0111]), wherein the view information includes first view information, which is view information of the images (Lim [0212], e.g., "classify a plurality of areas based on their distance from the user's viewpoint position candidates"; note that first view information = plurality of areas, and the distance of the plurality of areas implies view information of the images), and second view information, which is information on a view of a viewer (Lim [0230], e.g., "The high-importance position information may include at least one of the user's viewpoint position of the input video source"), and wherein the view weight of each of the groups is calculated based on a distance between the first view information and the second view information (Lim [0212], e.g., "More specifically, Step S2120 may obtain user's viewpoint position candidates of the input video source, and set one or more areas included within a certain range from the user's viewpoint position candidates as areas of importance, and classify a plurality of areas based on their distance from the user's viewpoint position candidates").

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Lee with the teachings of Lim with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make the modification for the benefit of reducing computational loads and minimizing buffering (Lim [0012], e.g., "According to some embodiments of the present disclosure, ultra-high resolution, such as 4K ultra high-definition (4K UHD) and 8K UHD, video contents are transmitted based on the viewpoint of the user of a (for example, VR) user terminal apparatus with differentiated bit rates applied between the user's viewing area (within the field of view) and the user's non-viewing area (outside the visual field), and thereby saves the amount of data for the video area to be reproduced as the background, resulting in minimized buffering effect").

Regarding claim 2, most of the limitations of this claim have been noted in the rejection of claim 1. Lee further teaches wherein the images are grouped into an image group of base view and an image group of additional view (Lee [0050], e.g., "The view optimizer 110 classifies view images into a basic image and an additional image. The basic image indicates a view image with highest pruning priority, which is not pruned, and the additional image indicates a view image with lower pruning priority than the basic image").

Regarding claim 3, most of the limitations of this claim have been noted in the rejection of claim 1. Lee further teaches wherein the grouping comprises: generating patches for the images by removing an overlapping region between the images in each group (Lee [0065], e.g., "Through the pruning process, overlapping data between the additional image and the reference image may be removed. Overlapping data detected from the additional image may be removed. As a result of performing pruning, a pruning mask that displays a non-overlapped region between the additional image and the reference image may be generated"); generating atlases for the images by packing the patches (Lee [0072], e.g., "The packing unit 126 may pack each of grouped patches on a rectangular image. During packing, modification such as size change, rotation or flipping of the patch may be involved. An image packed with patches may be defined as an atlas"); and grouping the atlases into the groups (Lee [0248], e.g., "Meanwhile, in the decoder, atlases of a plurality of groups may be packed into one atlas to enable divisional decoding. For example, as shown in FIG. 21, an atlas of group1 and an atlas of group2 may be repacked into one image, thereby generating Atlas1. In FIG. 21, Atlas1_1 indicates the atlas of group1 and Atlas1_2 indicates the atlas of group2").

Regarding claim 6, most of the limitations of this claim have been noted in the rejection of claim 1. Lee does not explicitly teach, but Lim teaches wherein the view weight of each of the groups is calculated to be a larger value as the distance between the first view information and the second view information becomes smaller (Lim [0211], e.g., "the closer the position of an area to the user's viewpoint position (or the object position in the input video source), the higher the importance of the area"). The motivation to combine is the same as that of claim 1 above.

Regarding claim 7, most of the limitations of this claim have been noted in the rejection of claim 1. Lee does not explicitly teach, but Lim teaches wherein the bitstream level of each of the groups is determined as a higher bitstream level as a value of the view weight becomes larger (Lim [0211], e.g., "Step S2120 sets an area of importance having the highest importance level from the input video source, classifies the input video source into a plurality of areas according to the degree of importance. Here, an area of importance means an area to be extracted from encoded data having been encoded at the highest bit rate"). The motivation to combine is the same as that of claim 1 above.

Regarding claim 10, Lee teaches an apparatus for encoding an immersive image, the apparatus comprising: a memory (Lee [0263], e.g., "hardware devices, such as read-only memory (ROM), random-access memory (RAM), flash memory, etc., which are particularly structured to store and implement the program instruction"); and at least one processor (Lee [0263], e.g., "The embodiments of the present disclosure may be implemented in a form of program instructions, which are executable by various computer components, and recorded in a computer-readable recording medium"). The rest of the claim recites an apparatus performing the method of claim 1 and is analyzed similarly.

Regarding claim 11, Lee teaches a method for transmitting a bitstream generated by an immersive image encoding method, wherein the immersive image encoding method comprises grouping images for a virtual reality space into groups (Lee [0106], quoted above in the rejection of claim 1). Lee does not explicitly teach, but Lim teaches, the remaining limitations: calculating, based on view information, a view weight of each of the groups (Lim [0110]); determining, based on the view weight, a bitstream level of each of the groups (Lim [0182]); selecting, among candidate bitstreams, a candidate bitstream corresponding to the determined bitstream level (Lim [0228]); transmitting the selected bitstream (Lim [0229]); and the "wherein" clauses concerning the candidate bitstreams, the first and second view information, and the distance-based view weight (Lim [0182], [0212], [0230]). The mapping and the motivation to combine are the same as those set forth for claim 1 above.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. US 20140198838 A1 to Andrysco et al. discloses classifying video frames into primary object regions and background object regions. Based on that classification, the area of interest, which constitutes the primary object region, is encoded with higher quality and the rest of the video frame with lower quality during low-bandwidth scenarios.

Contact Information

Any inquiry concerning this communication or earlier communications from the examiner should be directed to LAWRENCE TRUONG, whose telephone number is (571) 272-6973. The examiner can normally be reached Monday through Friday, 8:00 am to 4:00 pm ET. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Ali Shayanfar, can be reached at (571) 270-1050. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/LAWRENCE TRUONG/
Examiner, Art Unit 2434
/NOURA ZOUBAIR/
Primary Examiner, Art Unit 2434
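Claimed Method at a Glance

For readers less familiar with the claim language quoted above, the rejected independent claims describe a concrete pipeline: group the view images, weight each group by the distance between the images' view information and the viewer's view information (claim 6: smaller distance, larger weight), map each weight to a bitstream level (claim 7: larger weight, higher level), then select and transmit the matching pre-encoded candidate bitstream (claim 1). The sketch below is purely illustrative: the ViewGroup container, the inverse-distance weight, the three-level quantization, and all names are assumptions made for this example, not the applicant's implementation or either cited reference's disclosure.

    # Illustrative sketch only; data model, weight formula, and thresholds
    # are assumptions, not the application's actual method.
    import math
    from dataclasses import dataclass

    @dataclass
    class ViewGroup:
        name: str
        view_position: tuple        # "first view information" of the group's images
        candidate_bitstreams: dict  # bitstream level -> pre-encoded candidate id

    def view_weight(group, viewer_position):
        """Claim 6: weight grows as the distance between the group's view
        information and the viewer's view information shrinks."""
        distance = math.dist(group.view_position, viewer_position)
        return 1.0 / (1.0 + distance)

    def bitstream_level(weight):
        """Claim 7: a larger view weight maps to a higher bitstream level.
        The thresholds are arbitrary illustration values."""
        if weight > 0.5:
            return 2   # highest-quality candidate
        if weight > 0.2:
            return 1
        return 0       # lowest-quality candidate

    def select_and_transmit(groups, viewer_position):
        """Claim 1: calculate weights, determine levels, and select among
        the pre-encoded candidate bitstreams."""
        return [g.candidate_bitstreams[bitstream_level(view_weight(g, viewer_position))]
                for g in groups]  # stand-in for "transmitting the selected candidate bitstream"

    groups = [
        ViewGroup("near", (0, 0, 0), {0: "near_lo", 1: "near_mid", 2: "near_hi"}),
        ViewGroup("far",  (9, 0, 0), {0: "far_lo",  1: "far_mid",  2: "far_hi"}),
    ]
    print(select_and_transmit(groups, viewer_position=(0.2, 0.0, 0.0)))
    # -> ['near_hi', 'far_lo']

The group nearest the viewer receives the high-level stream and the distant group the low-level one, which is the weight-to-level behavior the examiner maps onto Lim's importance-based mixed video streams.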

Prosecution Timeline

Feb 28, 2023: Application Filed
May 09, 2025: Non-Final Rejection (§101, §103, §112)
Jul 18, 2025: Response Filed
Sep 11, 2025: Final Rejection (§101, §103, §112)
Oct 21, 2025: Response after Non-Final Action
Dec 17, 2025: Request for Continued Examination
Dec 21, 2025: Response after Non-Final Action
Mar 05, 2026: Non-Final Rejection (§101, §103, §112), current

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591375
DATA STORAGE DEVICE AND METHOD OF ACCESS IN CONFIDENTIAL MODE AND NORMAL MODE
Granted Mar 31, 2026 (2y 5m to grant)
Patent 12585751
MULTI-MODAL GESTURE SEQUENCE PASSCODE UNLOCKING APPARATUS FOR A HEAD-MOUNTED DISPLAY
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12566721
SYSTEM SEMICONDUCTOR WITH MULTI PROJECT CHIP FOR PROTECTING INTELLECTUAL PROPERTY RIGHT OF THE SYSTEM SEMICONDUCTOR AND THE METHOD THEREOF
Granted Mar 03, 2026 (2y 5m to grant)
Patent 12554818
SYSTEM, SERVER APPARATUS, AUTHENTICATION METHOD, AND STORAGE MEDIUM
Granted Feb 17, 2026 (2y 5m to grant)
Patent 12548393
SYSTEM, GATE DEVICE, CONTROL METHOD FOR GATE DEVICE, AND STORAGE MEDIUM
Granted Feb 10, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 100%
With Interview: 99% (+0.0%)
Median Time to Grant: 2y 2m
PTA Risk: High
Based on 12 resolved cases by this examiner. Grant probability derived from career allow rate.
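The footnote above says the grant probability is derived from the career allow rate. A minimal sketch of that arithmetic, assuming the headline figures are simple ratios over the 12 resolved cases (how the separate 99% with-interview figure is capped or adjusted is not stated, so it is not reproduced here):

    # Hypothetical reconstruction of the headline numbers from the stated
    # inputs; the ratio formulas are assumptions, not the vendor's model.
    granted, resolved = 12, 12
    allow_rate = granted / resolved      # 1.00 -> "100% Grant Probability"

    interviewed_allow_rate = 1.0         # assumed: interviewed cases were also all allowed
    interview_lift = interviewed_allow_rate - allow_rate
    print(f"grant probability {allow_rate:.0%}, interview lift {interview_lift:+.1%}")
    # -> grant probability 100%, interview lift +0.0%

With a 100% base rate there is no headroom for an interview to raise it, which is consistent with the +0.0% interview lift reported in the Examiner Intelligence section.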
