Prosecution Insights
Last updated: April 18, 2026
Application No. 17/628,532

SYSTEM AND METHOD FOR ADAPTIVE LENSLET LIGHT FIELD TRANSMISSION AND RENDERING

Status: Non-Final OA (§103, §112)
Filed: Jan 19, 2022
Examiner: ITSKOVICH, MIKHAIL
Art Unit: 2483
Tech Center: 2400 (Computer Networks)
Assignee: InterDigital VC Holdings, Inc.
OA Round: 9 (Non-Final)
Grant Probability: 35% (At Risk)
OA Rounds: 9-10
To Grant: 4y 0m
With Interview: 59%

Examiner Intelligence

Career Allow Rate: 35% (grants only 35% of cases: 206 granted / 585 resolved; -22.8% vs. TC avg)
Interview Lift: +23.8% (allowance rate of resolved cases with an interview vs. without)
Typical Timeline: 4y 0m avg prosecution; 62 applications currently pending
Career History: 647 total applications across all art units
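As a sanity check, the headline examiner figures above are internally consistent. The sketch below assumes the interview lift is simply the with-interview allowance rate minus the career average; that formula is an assumption about this dashboard's methodology, while the input figures come from the page itself.

```python
# Figures from the dashboard above; the lift formula is an assumed methodology.
granted, resolved = 206, 585
career_allow_rate = granted / resolved            # career allowance rate
with_interview_rate = 0.59                        # the 59% "with interview" figure
interview_lift = with_interview_rate - career_allow_rate

print(f"career: {career_allow_rate:.1%}")         # career: 35.2%
print(f"lift:   {interview_lift * 100:+.1f} pts") # lift:   +23.8 pts
```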

Statute-Specific Performance

  Statute   Rate     vs. TC Avg
  §101      11.5%    -28.5%
  §102      12.3%    -27.7%
  §103      53.5%    +13.5%
  §112      20.4%    -19.6%

Tech Center averages are estimates. Based on career data from 585 resolved cases.

Office Action

Rejections: §103, §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 03/17/2026 has been entered.

Response to Arguments

Applicant's arguments filed on 03/17/2026 have been fully considered, but they are not persuasive. In general, the Examiner notes that Applicant appears to rely on a subjective presumption that the claimed lenslet sub-views are special and different from the prior art; however, the presumed details are not made explicit in the claim limitations or the submitted arguments. The Specification describes claim embodiments that are substantively similar to the cited prior art. The Examiner notes that there are alternative embodiments in the Specification that describe features in much more precise language than the language used in the claims, for example in Paragraphs 140-143. The Examiner suggests identifying and claiming the specific steps that Applicant believes to be responsible for the intended improvement.

Regarding the newly amended language, Applicant argues: “To traverse the rejection and to more clearly define the invention, Applicant has amended claims 43 and 44. Specifically, representative claim 43 now recites "interpolating the remaining lenslet sub-views in the full array of lenslet sub-views from the mapped lenslet sub-views by reconstructing lenslet sub-views that were omitted from the retrieved lenslet representation, thereby generating a complete array of lenslet sub-views from a sparsely sampled subset." 
This limitation is not taught or suggested by the cited prior art combination.”

The Examiner notes that the amendments do not limit the claims to performing a particular step, but rather describe intended results of interpolation. As noted in the updated reasons for rejection below, interpolation inherently reconstructs omitted information. Also note an issue with antecedent basis under Section 112 below.

Applicant argues: “The "interpolation" referenced by the Examiner in D'Acunto (Col. 18, ln. 29-33) relates to Scalable Video Coding (SVC), a technique for enhancing the quality of a base layer, which is fundamentally different from generating entirely new views that were never transmitted.”

The Examiner notes that Applicant appears to rely on limitations that were not recited in the claims. Where the prior art recites the claimed features combined with additional features, omission of the additional features in the claim does not distinguish it over the prior art reference. M.P.E.P. 2144.04(II)(A); Ex parte Wu, 10 USPQ2d 2031 (Bd. Pat. App. & Inter. 1989); see also In re Larson, 340 F.2d 965, 144 USPQ 347 (CCPA 1965).

Applicant argues: “Thudor, on the other hand, is directed to encoding a 3D point cloud, not adaptive streaming. While Thudor teaches an "up-sampling process" (Col. 29, ln. 23-25), this process is for filling holes in a 3D point cloud after decoding to improve its spatial integrity. This is a 3D spatial filling operation. The claimed invention, in contrast, performs interpolation to generate missing 2D sub-views within a 2D array of views.”

The Examiner notes that the present claims are directed to representing light field video content, and Thudor is directed to representing light field video content in substantively the same context and manner. Both the Specification and Thudor also refer to using 3D point clouds as a format for light field data. 
The fact that the inventor has recognized another advantage which would flow naturally from following the suggestion of the prior art cannot be the basis for patentability when the differences would otherwise be obvious. See Ex parte Obiaya, 227 USPQ 58, 60 (Bd. Pat. App. & Inter. 1985).

Applicant argues: “2. The Claimed Data Structure and Preparation Workflow are Novel. The Examiner's analogy between D'Acunto's tile and the claimed packed dense array of lenslet sub-views is technically inaccurate. The data preparation methods are fundamentally different. D'Acunto teaches creating complete, self-contained video portions (tiles), which may simply be at a lower resolution (Col. 4, ln. 16-28). The present invention, in contrast, teaches a novel workflow: starting with a full array of sub-views, sampling a sparse subset from it, and packing this subset into a new, smaller dense array for transmission. Specification, [0143].”

The Examiner notes that the claims do not limit sub-views or dense arrays to a particular format, and the Specification clearly indicates using the tile-based streaming formats cited in the prior art to encode its data in a video coding standard. See Specification, Paragraph 173. The claims map data (lenslet representations) to a 2D video under an industry standard; D'Acunto teaches this methodology.

Applicant argues: “Consequently, the mapping metadata serves a different purpose. D'Acunto's mapping information relates a 2D tile's position to its location on a 3D sphere. The claimed mapping metadata, however, provides instructions for reversing the novel sampling-and-packing process by relating the locations of the sub-views in the packed dense array back to their original locations in the full array.”

The Examiner notes that the claims are not limited to the steps of “the novel sampling-and-packing process.” If that is the process on which Applicant relies for patentability, then the details of this process should be claimed. 
Applicant argues: “The references, alone or in combination, do not teach or suggest this specific data handling pipeline. There is no motivation in the prior art for a POSITA to modify D'Acunto's tile-stitching system to incorporate this more complex and specific workflow for handling lenslet data.”

The Examiner notes that the claimed “workflow” is not more complicated; it is broad, and it broadly covers tile-based streaming as in D'Acunto and as described in the Specification.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 2-9 and 43-57 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claims 2-9 and 43-57 recite the limitation "lenslet sub-views that were omitted from the retrieved lenslet representation" in independent claims 43-44. There is insufficient antecedent basis for this limitation in the claims, since the claims do not require a step of omitting, or otherwise define this feature, before referring to it.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 
102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 2-9 and 43-57 are rejected under 35 U.S.C. 103 as being unpatentable over US 11284124 to D'Acunto (“D'Acunto”) in view of US 11367247 to Thudor (“Thudor”) and in view of prior art cited in the Specification (“AAPA”).

Regarding Claim 43: “A method comprising: receiving, from a server, (“one or more video servers, configured for storing tiled omnidirectional video data 102 on the basis of a predetermined data format … and the spatial relation between the different subregions (tiles) of different tile streams are stored in a so-called spatial manifest file 106.” D'Acunto, Column 9, lines 38-53. 
Note similarly in AAPA, Specification, Paragraph 122.)

media manifest file describing regions of [light field] video content, (Under the broadest reasonable interpretation consistent with the specification and ordinary skill in the art, describing regions of video content includes “tile-based streaming.” See Specification, Paragraph 173. “receiving, preferably by the client device, a manifest file, the manifest file comprising a plurality of tile stream identifiers for identifying a plurality of tile streams, the tile streams comprising video frames having image views, whereby the image views of video frames of different tile streams cover different regions of a 2D projection of the omnidirectional video” D'Acunto, Column 4, lines 17-19. Note that the received video data and manifest file can be stored on a server. D'Acunto, Column 9, lines 38-53. Note similarly in AAPA, Specification, Paragraph 122. Note that light field video content described by lenslet representations is an example of multi-view video content; see below.)

each region is described by [lenslet] representations and corresponding mapping metadata, wherein for each region: (“The manifest file may comprise metadata for a client device in the video processing device that enables the video processing device to request tile streams [view representations] and to process the video data in the tile streams.” D'Acunto, Column 13, lines 46-50 and Column 5, lines 1-16. Note that light field video content described by lenslet representations is an example of multi-view video content; see below.) 
mapping metadata of a corresponding [lenslet] representation relate locations of [lenslet] sub-views in a dense array to their location in the array of [lenslet] sub-views; and (“receiving mapping information [metadata], preferably at least part of the mapping information being signaled to the client device in the manifest file, the mapping information providing the client device with information for enabling the client device to map the 2D projected video data of the tile streams [dense array] as omnidirectional video data [array] onto the curved surface; processing the video data of said received tile streams on the basis of the spatial relation information and the mapping information.” D'Acunto, Column 5, lines 1-16 and Column 26, lines 47-50. See application specific to views representative of lenslet data below.)

for each region of the light field video content, … selecting one of lenslet representations of the region, (“A client device in a video processing device may subsequently use the manifest file to select different video tiles on the basis of the central and peripheral FOV,” thus selecting a content representation for each region. D'Acunto, Column 5, lines 52-55. See application specific to lenslet representation below.)

retrieving the selected lenslet representation from the server, (“flexibility to download [from the server] different spatial subparts [regions] at different qualities and provide the user with a high quality experience while minimizing bandwidth usage.” D'Acunto, Column 5, lines 42-45. See application specific to lenslet representation below.) 
mapping, based on corresponding mapping metadata, lenslet sub-views of the retrieved lenslet representation from their locations in a dense array into their respective locations in the array of lenslet sub-views representing the region, (“receiving mapping information [metadata], preferably at least part of the mapping information being signaled to the client device in the manifest file, the mapping information providing the client device with information for enabling the client device to map the 2D projected video data of the tile streams as omnidirectional video data onto the curved surface; processing the video data of said received tile streams on the basis of the spatial relation information and the mapping information.” D'Acunto, Column 5, lines 1-27. Further, “the streaming client obtains the flexibility to download [select] different spatial subparts at different qualities,” thus a selected sub-view (having a different quality or resolution) can be mapped to the location of the original sub-view. D'Acunto, Column 5, lines 39-45. See application to lenslet representation below.)

interpolating the remaining lenslet sub-views in the array of lenslet sub-views from the mapped lenslet sub-views (See interpolation of base views based on scalable video coding (SVC) in D'Acunto, Column 18, lines 29-33. Cumulatively, it was known in the context of lenslet representation, and thus obvious in the art, that some form of “an up-sampling process may be applied to the areas of the scene having a point density below the determined level/value to increase the number of points.” Thudor, Column 29, lines 23-25. See statement of motivation below.)

by reconstructing lenslet sub-views that were omitted from the retrieved lenslet representation, thereby generating a complete array of lenslet sub-views from a sparsely sampled subset, and (First, note that interpolation by definition reconstructs samples that are omitted from the input, thereby generating a more complete set of samples. 
D'Acunto performs upsampling based on known information in Column 18, lines 29-33, and Thudor performs upsampling without requiring the information to be known in Column 29, lines 23-25. Since the prior art performs the claimed function, the prior art provides the benefits derived from performing the claimed function.)

displaying the array of lenslet sub-views.” (“to render the omnidirectional video data for display.” D'Acunto, Column 9, lines 61-65, and Column 4, lines 44-50. See similar embodiments in application to lenslet representation in Thudor, Column 29, lines 15-26, and statement of motivation below.)

D'Acunto teaches the above features in the context of multi-view video streaming and rendering in general, but it does not specify an application to a specific type of multi-view content such as “light field video content” or the manner in which that content would have been captured before use by the claims, such as embodied by “lenslet representations.” Thudor teaches the above claim features in the context of streaming and rendering multi-view video content: “The lightfield data (forming a so-called lightfield image) obtained with such a camera array 2A or 2B corresponds to the plurality of views of the scene,” with views corresponding to “the lenslet array and the photosensor array.” Thudor, Column 10, lines 23-31. “The different views obtained with the lightfield acquisition device enable to obtain an immersive content … Naturally, the immersive content may be obtained with an acquisition device different from a lightfield acquisition device, for example with a camera associated with a depth sensor (e.g. an infra-red emitter/receiver such as the Kinect of Microsoft or with a laser emitter).” Thudor, Column 10, lines 35-43. 
AAPA similarly indicates as known that industry standards have included “multi-view image array-type light field formats, such that the light field data consists of a number of views” and that “existing multi-view coding methods (e.g., MPEG HEVC or 3D HEVC) may be used for the compression of light fields.” AAPA, Specification, Paragraphs 108-110.

Therefore, before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to supplement the teachings of D'Acunto to treat lenslet representations and light field video content as multi-view content as taught in Thudor (and indicated as known in AAPA), so that “existing multi-view coding methods (e.g., MPEG HEVC or 3D HEVC) may be used for the compression of light fields.” AAPA, Specification, Paragraph 108; similarly, other multi-view video formats can be substituted for the light field format in Thudor, Column 10, lines 35-43.

Finally, in reviewing the present application, there does not seem to be objective evidence that the claim limitations are particularly directed to addressing a particular problem which was recognized but unsolved in the art, producing unexpected results at the level of ordinary skill in the art, or any other objective indicators of non-obviousness.

Further, “each region is described by lenslet representations and corresponding mapping metadata, wherein for each region:” (The prior art teaches various embodiments of this. In the above context, the portions / regions of light field video content can be mapped to 2D views, “for example to a 2D pixel representation of the 3D representation or of a part of the 3D representation of the scene. A depth map (also called height map) and a texture map (also called color map) … using the one or more parameters [metadata] describing the 2D parameterization associated with each part.” As noted in Thudor, Column 8, lines 20-26, 40-45, and Figs. 2, 5, 6. 
Also note that it is known that “By producing different quality layers for the omnidirectional content, and by dividing each quality layer in spatial subparts (sub-regions / tiles), the streaming client obtains the flexibility to download [select] different spatial subparts at different qualities.” D'Acunto, Column 5, lines 39-45. Thus, the content of each tile [sub-view] can be selected at a different layer or resolution [representation]. See application of lenslet images to 3D and 2D view representations, and statement of motivation above.)

each lenslet representation contains lenslet sub-views provided at a respective lenslet density representing an angular resolution, … the lenslet sub-views are sampled using a respective sampling density, from a full array of lenslet sub-views representing the region, (Under the broadest reasonable interpretation consistent with the specification and ordinary skill in the art: (a) a lenslet representation of a region in an image is a collection [array] of sub-views; (b) the number of sub-views in the collection would be determined based on a sampling density, which is not limited by this claim; and (c) representing angular resolution indicates that lenslet representations describe light fields. See Specification, Paragraphs 3, 107, 109. The “sampling” of this claim feature appears to be different from the “sub-sampling” of a lenslet representation directed to reducing the content size of the lenslet representation in Specification, Paragraphs 8-10 and 140-146. The claim language appears to describe a standard property of lenslet-represented video: “The lightfield data (forming a so-called lightfield image) obtained with such a camera array 2A or 2B corresponds to the plurality of views of the scene,” with views corresponding to “the lenslet array and the photosensor array.” Thudor, Column 10, lines 23-31. 
“The different views obtained with the lightfield acquisition device enable to obtain an immersive content …” Thudor, Column 10, lines 35-43. Thus each region of the video has data from multiple “sub-lenslet” representations. AAPA similarly indicates as known that industry standards have included “multi-view image array-type light field formats, such that the light field data consists of a number of views” and that “existing multi-view coding methods (e.g., MPEG HEVC or 3D HEVC) may be used for the compression of light fields.” AAPA, Specification, Paragraphs 108-110.)

[contains lenslet sub-views that are sampled from a full array of lenslet sub-views representing the region,] and the sampled lenslet sub-views are packed into a dense array of lenslet sub-views, (Under the broadest reasonable interpretation consistent with the specification and ordinary skill in the art, the dense array of lenslet sub-views appears to be directed to representing the image using tiles of different layers / resolutions in different parts of the image. The prior art teaches this embodiment: “image views of video frames of different tile streams cover different regions of a 2D projection … selecting, preferably by the client device, on the basis of spatial relation information in the manifest file and on the basis of a viewpoint of a user of the client device a first tile stream associated with a first resolution and a first tile position and a second tile stream associated with a second resolution and a second tile position, the second resolution being lower than the first resolution,” D'Acunto, Column 4, lines 16-27. See relation of lenslet images to 3D and 2D views / representations, and statement of motivation above.) 
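The sampling, packing, and interpolation workflow that claim 43 recites can be sketched as follows. This is a minimal illustration only: the every-other-view sampling pattern, the function names, and the nearest-neighbor fill are assumptions for demonstration, since the claims do not limit the sampling density or the interpolation method, and a real client would synthesize intermediate views rather than copy neighbors.

```python
# Illustrative sketch of the claimed pipeline; names and the every-other-view
# sampling pattern are hypothetical, not taken from the claims or the prior art.

def sample_and_pack(full_array, step=2):
    """Sample a sparse subset of sub-views from the full array and pack it
    into a smaller dense array, recording mapping metadata that relates
    each packed sub-view back to its original (row, col) location."""
    packed, metadata = [], []
    for r in range(0, len(full_array), step):
        for c in range(0, len(full_array[0]), step):
            packed.append(full_array[r][c])
            metadata.append((r, c))
    return packed, metadata

def unpack_and_interpolate(packed, metadata, rows, cols):
    """Map packed sub-views back to their original locations, then reconstruct
    the omitted sub-views (here by copying the nearest mapped sub-view; a real
    renderer would interpolate or synthesize the missing views)."""
    full = [[None] * cols for _ in range(rows)]
    for view, (r, c) in zip(packed, metadata):
        full[r][c] = view
    for r in range(rows):
        for c in range(cols):
            if full[r][c] is None:
                nr, nc = min(metadata, key=lambda m: abs(m[0] - r) + abs(m[1] - c))
                full[r][c] = full[nr][nc]
    return full

# Round trip over a hypothetical 4x4 array of sub-view labels.
views = [[f"v{r}{c}" for c in range(4)] for r in range(4)]
packed, meta = sample_and_pack(views)            # 4 of 16 sub-views transmitted
restored = unpack_and_interpolate(packed, meta, 4, 4)
```

The mapping metadata here is just the packed-index-to-original-location list, which is the role the claim assigns it: instructions for reversing the sampling-and-packing step on the client side.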
Claim 44, “An apparatus,” is rejected for the reasons stated for Claim 43, and because the prior art teaches: “at least one processor; and memory storing instructions that, when executed by the at least one processor, cause the apparatus to:” (“The present disclosure also relates to a (non-transitory) processor readable medium having stored therein instructions for causing a processor to perform at least the abovementioned method of encoding or decoding data representative of a 3D representation of a scene.” Thudor, Column 6, lines 58-63. See statement of motivation in Claim 43.)

Regarding Claim 2: “The method of claim 43, further comprising estimating a bandwidth available for streaming the light field video content, (The dimensions, size, and quality of the streamed video portions corresponding to views or lenslets are “determined or selected on the basis of the human FOV and bandwidth considerations.” D'Acunto, Column 5, lines 46-52. “In practice the available bandwidth will be a trade-off between efficiency and user experience.” D'Acunto, Column 1, lines 39-40 and Column 17, lines 7-10, and specific selections corresponding to available bandwidth in Table 1.) wherein the selecting of the one of the lenslet representations is based on the estimated bandwidth.” (“on the basis of the human FOV and bandwidth considerations. … A client device in a video processing device may subsequently use the manifest file to select different video tiles on the basis of the central and peripheral FOV.” D'Acunto, Column 17, lines 7-17, and specific selections corresponding to bandwidth in Table 1. See application of video tiles to lenslet representations in Claim 43.) 
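The bandwidth-driven selection described for claim 2 (and elaborated in claim 5 below as picking the representation with the largest minimum supported bandwidth that still fits the estimate) amounts to a standard adaptive-bitrate selection rule. A minimal sketch, with hypothetical names and bandwidth figures:

```python
# Hypothetical representation ladder; "min_bw" values are illustrative (Mbps).
def select_representation(representations, estimated_bw):
    """Pick the representation with the largest minimum supported bandwidth
    that does not exceed the estimated available bandwidth; None if none fits."""
    eligible = [r for r in representations if r["min_bw"] <= estimated_bw]
    return max(eligible, key=lambda r: r["min_bw"], default=None)

ladder = [
    {"name": "sparse", "min_bw": 2.0},
    {"name": "medium", "min_bw": 6.0},
    {"name": "dense",  "min_bw": 15.0},
]
choice = select_representation(ladder, estimated_bw=8.0)  # selects "medium"
```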
Regarding Claim 3: “The method of claim 43, wherein the selecting of the one of the lenslet representations is based on a display capability of a viewing client.” (The dimensions, size, and quality of the streamed video portions are “determined or selected on the basis of the human FOV and bandwidth considerations,” where FOV is a display capability. D'Acunto, Column 5, lines 46-52. “A HMD is further characterized by a field of view (FOV), i.e. an area of the omnidirectional video the HMD is able to display for a particular viewpoint and a given moment in time. The FOV of an HMD may be expressed on the basis of a spherical coordinate system.” D'Acunto, Column 10, lines 4-9.)

Regarding Claim 4: “The method of claim 43, further comprising predicting a viewpoint of a user viewing the light field video content, wherein the selecting of the one of the lenslet representations is based on the predicted viewpoint of the user.” (“adapted to receive information on the viewpoint of the user of the video processing device. The viewpoint may be continuously updated by a viewpoint engine 1136 and provided to the client device.” D'Acunto, Column 26, lines 57-60.)

Regarding Claim 5: “The method of claim 43, wherein the selecting of the one of the lenslet representations comprises: … determining respective minimum supported bandwidths for the lenslet representations; and … selecting the lenslet representation with a largest minimum supported bandwidth of the determined respective minimum supported bandwidths that is less than an estimated bandwidth available for streaming the light field video content.” (Under the broadest reasonable interpretation consistent with the specification and ordinary skill in the art, a measure of the respective (relative) bandwidth for a video portion is proportional or substantively identical to the measure of the respective size and quality of the video portion. 
See D'Acunto, Column 5, lines 46-50, and Table 1, which correspond to original Claim 6. The prior art teaches: (1) “By producing different quality layers for the omnidirectional content, and by dividing each quality layer in spatial subparts (tiles), the streaming client obtains the flexibility to download different spatial subparts at different qualities and provide the user with a high quality experience while minimizing bandwidth usage,” and (2) “The dimensions of the central and peripheral may be determined or selected on the basis of the human FOV and bandwidth considerations … the central part of the FOV may need the highest quality, while the peripheral parts of the FOV may be accommodated with lower quality … use the manifest file to select different video tiles on the basis.” See D'Acunto, Column 5, lines 35-55. Thus, the prior art selects the highest quality (largest minimum bandwidth) video portion in the desired field of view that complies with the target bandwidth considerations (estimates).)

Regarding Claim 6: “The method of claim 3, further comprising: … estimating maximum content size supported by an estimated bandwidth available for streaming the light field video content, … wherein the selecting of the one of the lenslet representations comprises selecting one of the lenslet representations with a content size that is less than the estimated maximum content size.” (See reasons for rejection in Claim 5. In particular, the prior art notes: “The dimensions [representing the content size] of the central and peripheral may be determined or selected on the basis of the human FOV and bandwidth considerations …” See D'Acunto, Column 5, lines 35-55.)

Regarding Claim 7: “The method of claim 1, further comprising: tracking a direction of gaze of a user viewing the light field video content, (“the viewpoint [gaze direction] may be associated with a field of view of the user.” D'Acunto, Column 5, lines 28-30. 
For example, “the gaze or the pose of the user is determined with a tracking system.” Thudor, Column 9, lines 51-56, and statement of motivation in Claim 43.) wherein the selecting of the one of the lenslet representations is based on the tracked direction of gaze of the user.” (The choice of dimensions, size, and quality of the streamed video portions is “determined or selected on the basis of the human FOV and bandwidth considerations.” D'Acunto, Column 5, lines 46-52.)

Regarding Claim 8: “The method of claim 7, wherein the selecting of the one of the lenslet representations comprises selecting a lenslet representation with a sampling density above a sampling density threshold for a region of the light field video content that is located within a gaze threshold of the tracked direction of gaze of the user.” (Under the broadest reasonable interpretation consistent with the specification and ordinary skill in the art: (a) the density is a measure of the content quality or data quantity of the sub-sampled region (such as the spatial resolution of original Claim 10); and (b) content located within a gaze threshold can be content located within the field of view (or within a particular part of the field of view) of the user. The prior art teaches this embodiment: “the central part of the FOV may need the highest quality [above a threshold], while the peripheral parts of the FOV may be accommodated with lower quality [below a threshold] … use the manifest file to select different video tiles on the basis.” See D'Acunto, Column 5, lines 35-55.)

Regarding Claim 9: “The method of claim 1, further comprising: predicting a viewpoint of a user viewing the light field video content; and (“adapted to receive information on the viewpoint of the user of the video processing device. The viewpoint may be continuously updated by a viewpoint engine 1136 and provided to the client device.” D'Acunto, Column 26, lines 57-60.) 
adjusting the selected lenslet representation based on the predicted viewpoint.” (“The dimensions of the central and peripheral may be determined or selected on the basis of the human FOV.” D'Acunto, Column 5, lines 46-47.)

Regarding Claim 45: “The method of claim 43, wherein lenslet sub-views, of a lenslet representation of a region of the light field video content, are sampled from an array of lenslet sub-views representing the region based on an estimate indicating that the sampled lenslet sub-views provide the most accurate interpolation for the remaining lenslet sub-views in the array.” (Under the broadest reasonable interpretation consistent with the specification and ordinary skill in the art, sub-views that provide interpolation correspond to enhancement layers that provide interpolation information to the lower layers of an SVC video. See Specification, Figs. 24A-24C. The prior art teaches this embodiment: “By producing different quality layers for the omnidirectional content, and by dividing each quality layer in spatial subparts (tiles), the streaming client obtains the flexibility to download different spatial subparts at different qualities and provide the user with a high quality experience while minimizing bandwidth usage.” See D'Acunto, Column 5, lines 35-55. See treatment of interpolation and SVC encoding in Claim 43.)

Claim 46 is rejected for the reasons stated for Claim 45 in view of the Claim 44 rejection.

Claims 47-54 are rejected for the reasons stated for Claims 2-9 in view of the Claim 44 rejection. 
Regarding Claim 55: “The method of claim 7, wherein the selecting of the one of the lenslet representations comprises: for a region of the light field video content that is located closer to the tracked direction of the gaze of the user, selecting a lenslet representation with a higher lenslet density than that of a region of the light field video content that is located farther from the tracked direction of the gaze of the user.” (Under the broadest reasonable interpretation consistent with the specification and ordinary skill in the art, this feature is directed to “sub-sampling” of a lenslet representation, aimed at reducing the content size of the lenslet representation as discussed in Specification, Paragraphs 8-10. This claim feature appears to be different from the “sampling” performed in Claim 43. Further, as noted in Specification, Paragraph 163, a higher lenslet density is related to a higher image resolution in areas around the user viewpoint. Prior art teaches an embodiment of the above feature. See methods of tracking gaze direction of the user in D’Acunto, Column 9, lines 65-67, and similarly in Thudor, Column 9, lines 53-56. Further, “The invention allows fast tile stream selection of tiles that need to be streamed to a client device, which is essential for providing a good quality of experience. … This way, on the basis of tiles of different resolution and different sizes a field of view may be constructed that comprises high resolution [higher lenslet density] video data in the center of the field of view and low resolution [sub-sampled lenslet at lower density] video data in the peripheral part of the field of view and/or outside the field of view.” D’Acunto, Column 4, lines 32-62. Note that the represented data can be lenslet data, as rejected in Claim 43. Also note that each spatial sub-part, such as a tile, can be downloaded at a different quality. D’Acunto, Column 5, lines 39-45.)
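As an illustrative aside (not part of the office action), the foveated selection the examiner maps to D’Acunto’s tile scheme, higher lenslet density near the tracked gaze and a sub-sampled representation in the periphery, can be sketched as a simple angular test. Everything below is hypothetical: the function names, the 10-degree foveal threshold, and the two-level "high"/"low" density labels are stand-ins, not terms from the application or the prior art.

```python
# Toy sketch of gaze-contingent lenslet-density selection.
# All names and thresholds are hypothetical illustrations.
import math

def angular_distance_deg(gaze, region_center):
    """Angle in degrees between two unit direction vectors."""
    dot = sum(g * r for g, r in zip(gaze, region_center))
    return math.degrees(math.acos(max(-1.0, min(1.0, dot))))

def select_lenslet_density(gaze, region_center, fovea_deg=10.0):
    """Pick a higher-density lenslet representation for regions near the
    tracked gaze direction, a sub-sampled (lower-density) one elsewhere."""
    if angular_distance_deg(gaze, region_center) <= fovea_deg:
        return "high"   # center of field of view: full lenslet density
    return "low"        # periphery: sub-sampled lenslet representation
```

A region straight ahead of the gaze would get the "high" representation, while a region 90 degrees off-axis would get the sub-sampled one; a production system would grade through several density tiers rather than two.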
Claim 56 is rejected for reasons stated for Claim 55 in view of the Claim 52 rejection.

Regarding Claim 57: “The method of claim 43, wherein the sampled lenslet sub-views of the lenslet representation are those sub-views of the lenslet representation that provide the most accurate interpolation of the remaining lenslet sub-views in the full array of lenslet sub-views.” (First, this claim is rejected for reasons stated for Claim 43, because it states an intended result (i.e., “most accurate”) without limiting Claim 43 to performing a particular step. Cumulatively, prior art indicates that sub-sampling is performed to preserve the information most important to quality, thus allowing the most accurate information to be recovered by up-sampling. See D’Acunto, Column 2, lines 54-67, and Column 4, lines 13-50, and Thudor, Column 24, lines 24-67.)

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MIKHAIL ITSKOVICH, whose telephone number is (571) 270-7940. The examiner can normally be reached Mon. through Thu., 9 am to 8 pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Joseph Ustaris, can be reached at (571) 272-7383. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/MIKHAIL ITSKOVICH/
Primary Examiner, Art Unit 2483

Prosecution Timeline

Jan 19, 2022 — Application Filed
Jan 19, 2022 — Response after Non-Final Action
Jul 01, 2023 — Non-Final Rejection (§103, §112)
Oct 06, 2023 — Response Filed
Oct 20, 2023 — Final Rejection (§103, §112)
Jan 23, 2024 — Request for Continued Examination
Jan 29, 2024 — Response after Non-Final Action
Feb 06, 2024 — Non-Final Rejection (§103, §112)
May 07, 2024 — Response Filed
Jul 26, 2024 — Final Rejection (§103, §112)
Oct 31, 2024 — Request for Continued Examination
Nov 03, 2024 — Response after Non-Final Action
Nov 15, 2024 — Non-Final Rejection (§103, §112)
Feb 10, 2025 — Response Filed
Mar 01, 2025 — Final Rejection (§103, §112)
May 19, 2025 — Request for Continued Examination
May 25, 2025 — Response after Non-Final Action
May 31, 2025 — Non-Final Rejection (§103, §112)
Sep 22, 2025 — Response Filed
Dec 23, 2025 — Final Rejection (§103, §112)
Mar 17, 2026 — Request for Continued Examination
Apr 01, 2026 — Response after Non-Final Action
Apr 03, 2026 — Non-Final Rejection (§103, §112) (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12548733 — Automating cryo-electron microscopy data collection (2y 5m to grant; granted Feb 10, 2026)
Patent 12489911 — IMAGE CODING METHOD, IMAGE DECODING METHOD, IMAGE CODING APPARATUS, RECEIVING APPARATUS, AND TRANSMITTING APPARATUS (2y 5m to grant; granted Dec 02, 2025)
Patent 12477146 — ENCODING AND DECODING METHOD, DEVICE AND APPARATUS (2y 5m to grant; granted Nov 18, 2025)
Patent 12452404 — METHOD FOR DETERMINING SPECIFIC LINEAR MODEL AND VIDEO PROCESSING DEVICE (2y 5m to grant; granted Oct 21, 2025)
Patent 12432328 — SYSTEM AND METHOD FOR RENDERING THREE-DIMENSIONAL IMAGE CONTENT (2y 5m to grant; granted Sep 30, 2025)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 9-10
Grant Probability: 35%
With Interview: 59% (+23.8%)
Median Time to Grant: 4y 0m
PTA Risk: High
Based on 585 resolved cases by this examiner. Grant probability derived from career allow rate.
