Prosecution Insights
Last updated: April 19, 2026
Application No. 18/960,461

Tile Shuffling for 360 Degree Video Decoding

Non-Final OA: §102, §103
Filed: Nov 26, 2024
Examiner: BRUMFIELD, SHANIKA M
Art Unit: 2487
Tech Center: 2400 — Computer Networks
Assignee: Telefonaktiebolaget LM Ericsson (publ)
OA Round: 1 (Non-Final)
Grant Probability: 68% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 9m
Grant Probability with Interview: 82%

Examiner Intelligence

Career Allow Rate: 68% — above average (263 granted / 386 resolved; +10.1% vs TC avg)
Interview Lift: +14.0% (moderate), measured across resolved cases with an interview
Typical Timeline: 2y 9m average prosecution; 25 applications currently pending
Career History: 411 total applications across all art units
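The headline figures above are simple ratios of the stated career data. A quick sketch reproduces them (illustrative only: the variable names and the additive interview adjustment are assumptions, not this dashboard's actual model):

```python
# Hypothetical reconstruction of the dashboard's headline numbers
# from the examiner's career statistics quoted above.

GRANTED = 263          # "263 granted"
RESOLVED = 386         # "386 resolved"
INTERVIEW_LIFT = 0.14  # "+14.0% Interview Lift" (assumed additive)

allow_rate = GRANTED / RESOLVED                # career allow rate
with_interview = allow_rate + INTERVIEW_LIFT   # interview-adjusted probability

print(f"Career allow rate: {allow_rate:.0%}")      # -> 68%
print(f"With interview:    {with_interview:.0%}")  # -> 82%
```

This matches the 68% and 82% figures shown, which suggests the "with interview" number is simply the allow rate plus the lift.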

Statute-Specific Performance

§101: 4.5% (-35.5% vs TC avg)
§103: 54.2% (+14.2% vs TC avg)
§102: 21.6% (-18.4% vs TC avg)
§112: 10.1% (-29.9% vs TC avg)
Deltas are measured against a Tech Center average estimate. Based on career data from 386 resolved cases.
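Each statute's "vs TC avg" delta implies the Tech Center baseline it was measured against. A hedged consistency check (the dashboard's exact methodology is not stated; the panel data is copied from above):

```python
# Back out the implied Tech Center average from each statute's
# rate and its reported delta: tc_avg = rate - delta.
panel = {
    "§101": (4.5, -35.5),
    "§103": (54.2, +14.2),
    "§102": (21.6, -18.4),
    "§112": (10.1, -29.9),
}
for statute, (rate, delta) in panel.items():
    tc_avg = rate - delta  # implied TC baseline, in percent
    print(f"{statute}: examiner {rate}%, implied TC avg {tc_avg:.1f}%")
# Every row implies the same 40.0% baseline, so the four deltas
# appear to share a single Tech Center average estimate.
```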

Office Action

Rejection bases: §102, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 21, 22, 25, 26, 29, 31, 32, 35, 36, and 39 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Sanchez et al., “Compressed Domain Video Processing for Tile Based Panoramic Streaming Using HEVC”, 2015 IEEE International Conference on Image Processing (ICIP) (hereinafter Sanchez), as cited by applicant.

Regarding claims 21 and 31, Sanchez teaches a method of decoding pictures from a bitstream of coded segments, each picture representing a view of a scene at a respective time, the view of the scene comprising a viewport representing a line of sight of a user to a respective region of the scene, wherein each coded segment represents a respective spatial region of the scene, and a decoding device comprising a memory and a processor coupled to the memory, wherein the processor is configured to cause the decoding device to perform the method, the method comprising:

receiving the bitstream (e.g. Figs. 3 – 5, and section 3: depicting and describing that the system receives a bitstream, the bitstream including coded segments [numbered tiles in Figs. 3 – 5], the pictures of the bitstream representing a view of a scene, the view of the scene including a viewport representing a line of sight of a user [e.g. Fig. 1 and the introduction, depicting and describing a video stream of panoramic images of a scene, the images including regions of interest associated with a user’s line of sight, wherein the region of interest is the equivalent of the viewport], each coded segment representing a spatial region of the scene [e.g. Figs. 2 and 3, and sections 2 and 3.1: depicting and describing that each segment of a picture [depicted as segments 0 – 9] represents a spatial region of the scene, each segment 0 – 9 representing a different spatial portion of the same scene]);

generating a first decodable picture by placing one or more coded segments from the bitstream, representing a region of the scene corresponding to a first viewport, in respective spatial positions in a decodable arrangement of coded segments for a first picture (e.g. Figs. 4 and 5, and section 3.1: depicting and describing that the system generates a decodable picture by placing coded segments [depicted as segments 0 – 4] representing a region of the scene corresponding to a first viewport [depicted as the region of interest at time t0, wherein the region of interest at time t0 is the equivalent of the first viewport]);

decoding the first decodable picture by decoding the decodable arrangement of coded segments for the first picture and placing the decoded segments for the first picture at respective spatial positions in an output arrangement of decoded segments for the first picture, according to a mapping of spatial positions in the decodable arrangement to spatial positions in the output arrangement, for use in rendering a respective view of the scene (e.g. section 3.2: describing that the system decodes the image associated with the region of interest at time t0, according to the spatial positions in the output arrangement);

generating a second decodable picture by placing one or more coded segments from the bitstream, representing a region of the scene corresponding to a second viewport, at respective spatial positions in a decodable arrangement for a second picture (e.g. Figs. 3 – 5 and section 3.2: depicting and describing that the system generates a Generated Reference Picture (GRP), the GRP representing a region of the scene corresponding to a second viewport [see, e.g., Figs. 4 and 5, and section 3.2: depicting and describing that the GRP contains image data of the region of interest at time t1, wherein image data of the region of interest at time t1 is the equivalent of the second viewport]), wherein:

a coded segment representing a region of the scene corresponding to the second viewport that is not represented in the first decodable picture is an intra-coded segment and is placed at a spatial position in the decodable arrangement for the second picture that corresponds to a spatial position in the decodable arrangement for the first picture used to represent a region of the scene that is not represented in the second decodable picture (e.g. Figs. 3 – 5 and section 3.2: depicting and describing that a portion of the GRP includes coded segments [segments 5 – 7] that are not represented in the first decodable picture and are placed at spatial positions corresponding to coded segments in the first picture that are not included in the second region of interest [the region of interest at time t1]); and

a coded segment representing a region of the scene corresponding to the second viewport that is included in the first viewport is a temporally-predicted segment and is placed at the same spatial position in the decodable arrangement for the second picture as for a coded segment in the decodable arrangement for the first picture representing the same region in the second viewport (e.g. Figs. 3 – 5 and section 3.2: depicting and describing that a portion of the GRP includes coded segments [segments 3 and 4] that are represented in the first decodable picture and are placed at spatial positions corresponding to coded segments in the first picture that are included in the second region of interest [the region of interest at time t1]); and

decoding the second decodable picture by decoding the decodable arrangement of coded segments for the second picture and placing the decoded segments for the second picture at respective spatial positions in an output arrangement of decoded segments for the second picture, according to the mapping, for use in rendering the respective view of the scene (e.g. section 3.2: describing that the GRP is encoded, reasonably suggesting that the GRP is decoded).

Turning to claims 22 and 32, Sanchez teaches all of the limitations of claims 21 and 31, respectively, as discussed above. Sanchez further teaches:

wherein the bitstream comprises coded segments representing regions of the scene other than those regions included in the first or second viewport (e.g. Figs. 1 and 2, and section 2: depicting and describing that the encoded images include image data representing regions outside of the region of interest of a user [depicted as image data outside of the blue bounding regions], wherein image data outside of the regions of interest is the equivalent of the regions of the scene other than those regions included in the first or second viewport);

generating the first decodable picture comprises selecting from the bitstream the one or more coded segments to represent the region of the scene corresponding to the first viewport, and placing the selected coded segments for the first viewport at the respective spatial positions in the decodable arrangement of coded segments for the first picture (e.g. Figs. 2 – 5 and sections 2 and 3.2: depicting and describing that the system selects image data corresponding to the identified region of interest and generates decodable pictures corresponding to the identified region of interest, wherein the region of interest is the equivalent of the first viewport); and

generating the second decodable picture comprises selecting from the bitstream the one or more coded segments to represent the region of the scene corresponding to the second viewport and placing the selected coded segments for the second viewport at the respective spatial positions in the decodable arrangement of coded segments for the second picture (e.g. Figs. 2 – 5 and sections 2 and 3.2: depicting and describing that the system selects image data corresponding to the identified region of interest and generates decodable pictures corresponding to the identified region of interest, wherein the region of interest is the equivalent of the second viewport).

Regarding claims 25 and 35, Sanchez teaches all of the limitations of claims 21 and 31, respectively, as discussed above.
Sanchez further teaches: wherein the decodable arrangement for the first picture comprises the same number of coded segments as the decodable arrangement for the second picture (e.g. Figs. 4 and 5, and section 3.2: depicting and describing that the number of coded segments in the first picture is the same as the number of coded segments in the second picture [depicted as both having 5 segments]).

Turning to claims 26 and 36, Sanchez teaches all of the limitations of claims 21 and 31, respectively, as discussed above. Sanchez further teaches: wherein the output arrangement for the first picture comprises the same number of decoded segments as the output arrangement for the second picture (e.g. Figs. 4 and 5, and section 3.2: depicting and describing that the number of coded segments in the first picture is the same as the number of coded segments in the second picture [depicted as both having 5 segments]).

Regarding claims 29 and 39, Sanchez teaches all of the limitations of claims 21 and 31, respectively, as discussed above. Sanchez further teaches:

wherein the view of the scene includes a view of a region of the scene outside of a respective one of said first and second viewport (e.g. Fig. 1 and section 1: depicting and describing that the scene includes a region outside of the first region of interest [t0] and the second region of interest [t1], wherein the first region of interest and the second region of interest are the equivalents of the first viewport and the second viewport);

wherein generating the first decodable picture further comprises placing one or more coded segments representing a region of the scene visible to the user outside of the first viewport at respective spatial positions in the decodable arrangement for the first picture (e.g. Figs. 3 – 5 and section 3.2: depicting and describing that the decodable pictures include image data representing portions of the scene outside of the region of interest [depicted as portions of coded segments 0 – 4 that are outside of the blue bounding box, the blue bounding box encompassing the first region of interest, wherein the first region of interest is the equivalent of the first viewport]); and

wherein generating the second decodable picture further comprises placing one or more coded segments representing a region of the scene visible to the user outside of the second viewport at respective spatial positions in the decodable arrangement for the second picture (e.g. Figs. 3 – 5 and section 3.2: depicting and describing that the decodable pictures include image data representing portions of the scene outside of the region of interest [depicted as portions of coded segments 3 – 7 that are outside of the blue bounding box, the blue bounding box encompassing the second region of interest, wherein the second region of interest is the equivalent of the second viewport]).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 23, 24, 33, and 34 are rejected under 35 U.S.C. 103 as being unpatentable over Sanchez, as applied to claims 21 and 31, respectively, above, and further in view of Skupin et al. (US 2018/0098077) (hereinafter Skupin), as cited by applicant.

Regarding claims 23 and 33, Sanchez teaches all of the limitations of claims 21 and 31, respectively, as discussed above. Sanchez does not explicitly teach: wherein the mapping further comprises an indication of a transformation to be applied to a respective decoded segment according to a predetermined rendering format.

Skupin, however, teaches a method and device for decoding a picture: wherein the mapping further comprises an indication of a transformation to be applied to a respective decoded segment according to a predetermined rendering format (e.g. Fig. 2 and par. 64: depicting and describing that the system generates an image [element 40] by mapping coding segments [elements 32a and 32b] into the image frame, the mapping including information indicating a size difference between a coding segment and its mapped position in the generated picture, wherein information indicating a size difference is the equivalent of an indication of a transformation to be applied to the respective decoded segment according to a predetermined rendering format).
It therefore would have been obvious to one of ordinary skill in the art to modify the teachings of Sanchez by adding the teachings of Skupin in order for the mapping to further comprise an indication of a transformation to be applied to a respective decoded segment according to a predetermined rendering format. One of ordinary skill in the art would have been motivated to make such a modification because it improves efficiency in changing the composition of a coded version of video content without penalties in terms of bitrate consumption (Skupin, e.g. par. 11).

Turning to claims 24 and 34, Sanchez and Skupin teach all of the limitations of claims 21 and 23, and claims 31 and 33, respectively, as discussed above. Sanchez does not explicitly teach: wherein the transformation comprises one or more of: a change to the height of a region represented by the decoded segment; a change to the width of a region represented by the decoded segment; and a rotation of a region represented by the decoded segment.

Skupin, however, teaches a method and device for decoding a picture: wherein the transformation comprises one or more of: a change to the height of a region represented by the decoded segment; a change to the width of a region represented by the decoded segment; and a rotation of a region represented by the decoded segment (e.g. Fig. 2 and par. 64: depicting and describing that the system changes a size of a decoded segment, wherein changing a size of a decoded segment is the equivalent of changing the height and the width of the region represented by the decoded segment).
It therefore would have been obvious to one of ordinary skill in the art to modify the teachings of Sanchez by adding the teachings of Skupin in order for the transformation to comprise one or more of: a change to the height of a region represented by the decoded segment; a change to the width of a region represented by the decoded segment; and a rotation of a region represented by the decoded segment. One of ordinary skill in the art would have been motivated to make such a modification because it improves efficiency in changing the composition of a coded version of video content without penalties in terms of bitrate consumption (Skupin, e.g. par. 11).

Claims 27, 28, 37, and 38 are rejected under 35 U.S.C. 103 as being unpatentable over Sanchez, as applied to claims 21 and 31, respectively, above, and further in view of Lim et al. (US 2014/0119671) (hereinafter Lim), as cited by applicant.

Regarding claims 27 and 37, Sanchez teaches all of the limitations of claims 21 and 31, respectively, as discussed above. Sanchez does not explicitly teach: further comprising decoding from the bitstream mapping data providing an indication of the mapping.

Lim, however, teaches a method and device for decoding a picture: further comprising decoding from the bitstream mapping data providing an indication of the mapping (e.g. Fig. 10 and pars. 84 – 86: depicting and describing that the system obtains syntax information indicating position information for mapping coded tile segments to an output picture).
It therefore would have been obvious to one of ordinary skill in the art to modify the teachings of Sanchez by adding the teachings of Lim in order to decode from the bitstream mapping data providing an indication of the mapping. One of ordinary skill in the art would have been motivated to make such a modification because it allows for non-equal tile regions to be presented in an output picture (Lim, e.g. Fig. 4 and par. 6: depicting and describing an output picture split into 3 non-equal regions).

Turning to claims 28 and 38, Sanchez and Lim teach all of the limitations of claims 21 and 27, and claims 31 and 37, respectively, as discussed above. Sanchez does not explicitly teach: wherein the mapping data comprise an output index value associated with each coded segment in the decodable arrangement, the output index value corresponding to an indexed segment position in the output arrangement.

Lim, however, teaches a method and device for decoding a picture: wherein the mapping data comprise an output index value associated with each coded segment in the decodable arrangement, the output index value corresponding to an indexed segment position in the output arrangement (e.g. Fig. 10 and pars. 84 – 86: depicting and describing that the system obtains syntax information indicating position information for mapping coded tile segments to an output picture, wherein the syntax information reasonably suggests an output index value corresponding to an indexed segment position in the output arrangement).

It therefore would have been obvious to one of ordinary skill in the art to modify the teachings of Sanchez by adding the teachings of Lim in order for the mapping data to comprise an output index value associated with each coded segment in the decodable arrangement, the output index value corresponding to an indexed segment position in the output arrangement.
One of ordinary skill in the art would have been motivated to make such a modification because it allows for non-equal tile regions to be presented in an output picture (Lim, e.g. Fig. 4 and par. 6: depicting and describing an output picture split into 3 non-equal regions).

Claims 30 and 40 are rejected under 35 U.S.C. 103 as being unpatentable over Sanchez, as applied to claims 21 and 29, and claims 31 and 39, respectively, above, and further in view of Curcio et al. (US 2018/0249163) (hereinafter Curcio), as cited by applicant.

Regarding claims 30 and 40, Sanchez teaches all of the limitations of claims 21 and 29, and claims 31 and 39, respectively, as discussed above. Sanchez further teaches: wherein the one or more coded segments representing the viewports comprise coded segments encoded at a high quality level (e.g. section 1: describing that the identified regions of interest within the panoramic video are video segments at a high spatial resolution, wherein the identified region of interest is the equivalent of the viewports, and wherein high spatial resolution is the equivalent of a high quality level).

Sanchez does not explicitly teach: wherein the one or more coded segments representing the viewports comprise coded segments encoded at a first quality level and the one or more coded segments representing the regions of the scene visible to the user outside of the viewports comprise coded segments encoded at a second, lower quality level.
Curcio, however, teaches: wherein the one or more coded segments representing the viewports comprise coded segments encoded at a first quality level and the one or more coded segments representing the regions of the scene visible to the user outside of the viewports comprise coded segments encoded at a second, lower quality level (e.g. par. 2: describing that coded segments representing the viewports include coded segments encoded with a higher visual quality and coded segments representing non-viewport regions include coded segments encoded with a lower visual quality).

It therefore would have been obvious to modify the teachings of Sanchez by adding the teachings of Curcio in order for the one or more coded segments representing the viewports to comprise coded segments encoded at a first quality level and the one or more coded segments representing the regions of the scene visible to the user outside of the viewports to comprise coded segments encoded at a second, lower quality level. One of ordinary skill in the art would have been motivated to make such a modification because it improves the visual quality of viewport-based video streaming (Curcio, e.g. par. 4).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHANIKA M. BRUMFIELD, whose telephone number is (571) 270-3700. The examiner can normally be reached M-F, 8:30 AM - 5 PM AWS.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, David Czekaj, can be reached at 571-272-7327. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SHANIKA M BRUMFIELD/
Examiner, Art Unit 2487

/TSION B OWENS/
Primary Examiner, Art Unit 2487
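The rejected independent claims turn on decoding tiles in a "decodable arrangement" and then shuffling them into an "output arrangement" per a mapping of spatial positions. A minimal sketch of that mapping step (hypothetical names and toy data, not code from the application or from Sanchez):

```python
# Sketch of the claimed mapping step: decoded segments arrive in
# decodable-arrangement order and are placed into output (render)
# positions according to a fixed position-to-position mapping.
def place_segments(decoded_segments, mapping):
    """decoded_segments: list indexed by decodable position.
    mapping: dict, decodable position -> output position."""
    output = [None] * len(decoded_segments)
    for dec_pos, seg in enumerate(decoded_segments):
        output[mapping[dec_pos]] = seg
    return output

# Toy example: three tiles whose decodable positions 0..2 are
# shuffled into different output positions for rendering.
mapping = {0: 2, 1: 0, 2: 1}
print(place_segments(["tileA", "tileB", "tileC"], mapping))
# -> ['tileB', 'tileC', 'tileA']
```

Because the same mapping is reused for the second picture (the claims recite "according to the mapping"), a tile that keeps its decodable position between viewports also keeps its output position, which is what allows the temporally-predicted segments to reference the prior picture.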

Prosecution Timeline

Nov 26, 2024 — Application Filed
Apr 29, 2025 — Response after Non-Final Action
Dec 23, 2025 — Non-Final Rejection, §102 / §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598369 — SURFACE TOPOGRAPHY MEASUREMENT SYSTEMS — Granted Apr 07, 2026 (2y 5m to grant)
Patent 12591125 — Microscopy System and Method for Checking a Rotational Position of a Microscope Camera — Granted Mar 31, 2026 (2y 5m to grant)
Patent 12587642 — ENCODING METHOD, DECODING METHOD, CODE STREAM, ENCODER, DECODER AND STORAGE MEDIUM — Granted Mar 24, 2026 (2y 5m to grant)
Patent 12581070 — EDGE OFFSET FOR CROSS COMPONENT SAMPLE ADAPTIVE OFFSET (CCSAO) FILTER — Granted Mar 17, 2026 (2y 5m to grant)
Patent 12581090 — QUANTIZATION PARAMETER FOR CHROMA DEBLOCKING FILTERING — Granted Mar 17, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 68%
With Interview: 82% (+14.0%)
Median Time to Grant: 2y 9m
PTA Risk: Low
Based on 386 resolved cases by this examiner. Grant probability derived from career allow rate.
