DETAILED ACTION
This communication is in response to the submission filed 09/15/2025. A three (3) month Shortened Statutory Period for Response has been set.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Acknowledgements
3. Upon new entry, claims (60-79) remain pending for examination, of which claims (60, 74, 79) are the three (3) independent claims of record.
Examiner thanks Applicant's representative (Atty. C. Dixon; Reg. No. 79,927) for the detailed remarks and clarifications, and for the cooperation in expediting the case.
Claims (60-79) remain rejected under 35 U.S.C. 103, as previously set forth.
Response to Applicant’s arguments
Applicant's arguments have been carefully considered, but they are not persuasive, because no allowable subject matter has been identified, for at least the following reasons:
5.1. The undersigned Examiner considers that the previously presented combined prior art (PA) of record discloses all the features and limitations as claimed: a 2D/3D panoramic codec system comprising an image pipeline, a coding technique, and bilateral filtering/deblocking techniques in accordance with the AVC format, able to generate depth maps for each camera view using reprojected views from adjacent cameras, to perform pixel depth refinement between different levels, and to determine discontinuity based on processing information, features which, for the most part, were common knowledge at the time of the invention.
5.2. Examiner stresses that no allowable subject matter has yet been identified in the claims. The claim language instead recites a set of well-known techniques for codec and filtering applications, commonly used and well documented well before the invention was filed/made.
5.3. Examiner notes that Applicant recites a plurality of well-known techniques following terms such as "wherein," "including," "existing," "indicating," and "adapting," which passively indicate that a function is performed without requiring the functional structure/methodology as a limitation of the claim itself. Such claim language does not further limit the claims and does not require a separate ground of rejection; (see MPEP 2111.04). The clause may be given some weight to the extent it provides "meaning and purpose" to the claimed invention, but not when "it simply expresses the intended result" of the invention.
5.4. More specifically, Applicant argues:
5.4.1. Applicant argues the basics of Briggs: [...determine whether a depth discontinuity exists; (Remarks; page 7)]. Examiner respectfully disagrees because, under the broadest reasonable interpretation (BRI) consistent with the instant specification and the common knowledge of one of ordinary skill in the art, Briggs specifically discloses an analogous determination step in at least a 3D panoramic multi-camera system (Fig. 7), employing compression and deblocking filtering techniques in accordance with the codec standards [Briggs; 12: 10], similarly able to determine and employ depth information, with or without discontinuity around objects, as shown in detail in Figs. (31-32); [Briggs; Cols. (12-13) and 32].
5.4.2. Applicant argues a failure to disclose [...filter adaptation of the target video-block based on depth information; (Remarks; page 8)]. Examiner also respectfully disagrees because, under the same BRI standard, Briggs discloses at least: filtering/blurring a depth map for an image based on a machine-learned set of depth transform parameters for the image; [3: 10 and 23: 61].
In this regard, Briggs discloses a plurality of examples associated with filter adaptation (i.e. feedback), via filtering the obtained depth information (Figs. 22, 23, 27) associated with the current video block, with and/or without continuity; [Briggs; Cols. 12-13, 32]; executed in the "depth calculation unit" (716), Fig. 7; [Briggs; 11: 33]; and using/associated with the "memory management" operations of Fig. 10.
Briggs employs filter adaptation based on depth map information; for example, in Fig. 23, weighted feedback is signaled (see feedback 2380).
Similarly, Briggs teaches the same functionality by applying a training algorithm that may modify and signal weights (i.e. depth and/or other parameters (2720)) input to the filter unit, in order to improve the filtering (2740) operations, Fig. 27; [Briggs; 28: 25].
5.4.3. Regarding the rationale and motivation for the mapped claims, please refer to the Rejection section (6) below.
Finally, the Office considers Applicant's arguments not persuasive, as the rejection of record as a whole reads on the claimed construction, establishing a prima facie case of equivalence on the basis that a person of ordinary skill in the art would have recognized the similar elements shown, or the same structural similarities, wherein such structure/methodology performs the same functions in substantially the same way to produce the same results.
_ See MPEP 2183 (Making a Prima Facie Case of Equivalence);
_ See In re Bond, 910 F.2d 831, 833, 15 USPQ2d 1566, 1568 (Fed. Cir. 1990), applicable when similar structure is shown;
_ See Kemco Sales, Inc. v. Control Papers Co., 208 F.3d 1352, 54 USPQ2d 1308 (Fed. Cir. 2000), applicable when identical functionality is specified in the claim, performed in substantially the same way.
Claim rejection section
6. This is a quotation of 35 U.S.C. 103(a), which forms the basis for all obviousness rejections set forth in this Office action:
(a) A patent may not be obtained though the invention is not identically disclosed or described as set forth in section 102 of this title, if the differences between the subject matter sought to be patented and the prior art are such that the subject matter as a whole would have been obvious at the time the invention was made to a person having ordinary skill in the art to which said subject matter pertains. Patentability shall not be negatived by the manner in which the invention was made.
6.1. The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied in establishing a background for determining obviousness under 35 U.S.C. 103(a), are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or non-obviousness.
6.2. Claims (60-79) stand rejected under 35 U.S.C. 103(a) on the same ground, as being unpatentable over Hu et al., "Trained Bilateral Filters and Applications to Coding Artifacts Reduction" (hereafter "Hu"), in view of Briggs et al., US 10,262,238 B2 ("Briggs").
Claim 60. (Presented) Hu discloses the invention substantially as claimed - A video decoding device, comprising: (e.g. a codec implementation in accordance with the H.264/AVC standard (which by definition is able to process 2D and/or 3D/depth image/video data, emphasis added), able to satisfy artifact/noise reduction requirements, as shown in Figs. (1 and 4); [Chap. (3-4)]; also employing trained bilateral/deblocking filter techniques, as illustrated in at least Fig. 2; [Chap. (2-3)]).
Given the teachings of Hu et al. as a whole, and the evident purpose of the paper, it is noted that some of the functional steps/components as claimed (e.g. no full encoder/decoder schematic is disclosed) are missing or not fully described in the paper.
For the purpose of additional clarification, and in the same field of endeavor, Briggs et al. discloses a 3D-capable multi-camera system (Figs. 1, 28), in accordance with the well-known compression standards [Briggs; 12: 10], with similar architectural support from at least Fig. 7, able to eliminate/reduce artifact/noise in the process by comparing/differencing (2230) the two 3D components (i.e. left and right images), as shown in Figs. (22-23), similarly employing depth information, with or without discontinuity around objects, as shown in Figs. (31-32); [Briggs; 12: 50 - 13: 10; and Col. 32].
More specifically Briggs discloses - a processor configured to: (e.g. processing unit (714), Fig. 7; [Briggs; 11: 33]); obtain video data that includes a current video block; (e.g. see video camera sensor (700), Fig. 7; [Briggs; 20: 36]); obtain depth information associated with the current video block; (e.g. see depth calculation unit (716), Fig. 7; [Briggs; 11: 33]);
determine whether a depth discontinuity exists in the current video block based on the depth information, (e.g. see Figs. (31-32), Fig. 7; [Briggs; 11: 33; Col. 32]) based on a determination that a depth discontinuity exists in the current video block, (e.g. depth flag (930) for continuity check, Figs. (9 -10); [Briggs; 12: 50 -13: 10; Col. 32]);
adapt a filtering operation associated with the current video block based at least on the depth information; (e.g. applying deblock filtering accordingly (1060) based on depth map information (1050), as simulated in Figs. 10 A/B; [Briggs; 13: 35]);
and process the current video block based at least on the adapted filtering operation; (e.g. see image processing of the video sequence (950), Figs. (9 -10); [Briggs; 12: 65]).
It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to combine the teachings of Hu with the multi-camera codec system of Briggs, in order to provide (e.g. a multi-view environment able to similarly capture and create panoramic video/images in the form of "omni-directional," "360-degree," or "spherical" content, employing a pipeline architecture to generate a depth map for the image so as to effectively permit generation of consistent synthetic views across overlapping (or non-overlapping) camera views, effectively providing a depth estimate for pixels in the images, and also able to accurately track changes in depth across frames and between objects and backgrounds where required; [Briggs; Cols. 1-2]).
Claim 61. (Presented) Hu/Briggs discloses - The video decoding device of claim 60, wherein the video data includes an indication indicating that a depth discontinuity exists in the current video block, wherein the processor being configured to determine whether a depth discontinuity exists in the current video block is further based on the indication; (e.g. see depth flag (930) for continuity check, Figs. (9 -10); [Briggs; 12: 50 -13: 10]; the same motivation applies herein.)
Claim 62. (Presented) Hu/Briggs discloses - The video decoding device of claim 60, wherein the processor being configured to determine whether a depth discontinuity exists in the current video block is further based on a depth component available at the video decoding device; (e.g. depth flag (930) for continuity check, Figs. (9 -10); [Briggs; 12 -13]; the same motivation applies herein.)
Claim 63. (Presented) Hu/Briggs discloses - The video decoding device of claim 62, wherein the processor is further configured to determine, based on the depth component available at the video decoding device, at least one of a position or a direction of the depth discontinuity in the current video block; (e.g. see continuity depth map, including position and direction, as shown in Fig. 10 A/B; [Briggs; 12: 50 -13: 10]; the same motivation applies herein.)
Claim 64. (Presented) Hu/Briggs discloses - The video decoding device of claim 60, wherein the filtering operation is associated with at least one of a bilateral filter (BLF), a deblocking filter (DBF), an adaptive loop filter (ALF), a sample adaptive offset (SAO) filter, or a cross-component sample adaptive offset (CC-SAO) filter; (e.g. H.264/AVC deblock techniques supported in the standard; [Hu; page 1].)
Claim 65. (Presented) Hu/Briggs discloses - The video decoding device of claim 64, wherein the filtering operation is associated with the DBF, (e.g. see deblocking filtering technique in accordance with the compression standards; [Hu; Chap. 4.3])
and wherein the processor being configured to adapt the filtering operation associated with the current video block based at least on the depth information comprises the processor being configured to: (e.g. training adaptive bilateral filters and/or in-loop filters in the process; [Hu; Chap. 4.3; 5]); determine a depth difference between a first sample and a second sample based on the depth information; (e.g. spatial differences between pixel samples are determined, filtered, and trained in at least [Hu; Chap. 2]);
and when applying the DBF to the first sample based on the second sample, determine a strength of the DBF based on the depth difference between the first sample and the second sample; (e.g. see deblock filter (i.e. ILF) strength/tap parameters adjustment, using BS/QP parameters, in accordance with the AVC codec; [Hu; Chap. 4.3]).
Claim 66. (Presented) Hu/Briggs discloses - The video decoding device of claim 65, wherein the strength of the DBF is set at a first value if the depth difference between the first sample and the second sample is greater than a threshold value, and wherein the strength is set at a second value if the depth difference between the first sample and the second sample is equal to or less than the threshold value; (e.g. error refinement (i.e. difference or disparity) between samples is implemented based on a plurality of predetermined threshold parameters associated with depth; [Briggs; Col. 17: 65; 21: 17; 29: 28; 32: 50]; the same motivation applies herein.)
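For illustration only (not part of the record, and not drawn from Hu or Briggs), the threshold-based strength selection recited in claim 66 can be sketched as follows; the names `dbf_strength`, `STRONG`, `WEAK`, and `THRESHOLD` and the numeric values are hypothetical placeholders:

```python
# Hypothetical sketch of claim 66: a deblocking-filter (DBF) strength is
# selected per sample pair from the depth difference. All names and
# constants are illustrative, not taken from the cited references.

STRONG = 2     # first strength value, used across a depth discontinuity
WEAK = 1       # second strength value, used within a continuous surface
THRESHOLD = 8  # example depth-difference threshold (arbitrary units)

def dbf_strength(depth_a: int, depth_b: int) -> int:
    """Return the DBF strength for a sample pair based on depth difference."""
    depth_diff = abs(depth_a - depth_b)
    if depth_diff > THRESHOLD:
        return STRONG  # depth difference greater than threshold: first value
    return WEAK        # equal to or less than threshold: second value
```

This mirrors the claim's two-branch condition: strictly greater than the threshold yields the first value, and equal-or-less yields the second value.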
Claim 67. (Presented) Hu/Briggs discloses - The video decoding device of claim 66, wherein a contribution of the second sample to the filtering operation is adjusted such that the contribution is inversely proportional to the depth difference between the first sample and the second sample; (e.g. spatial differences between pixel samples are determined, filtered, and trained in at least [Hu; Chap. 2].)
Claim 68. (Presented) Hu/Briggs discloses - The video decoding device of claim 64, wherein the filtering operation is associated with the BLF, and wherein the processor being configured to adapt the filtering operation associated with the current video block based at least on the depth information comprises the processor being configured to: (e.g. see BLF similarly used in a standard AVC codec implementation; [Hu; Chap. 1]) determine a depth difference between a first sample and a second sample based on the depth information; (e.g. spatial differences between pixel samples are determined, filtered, and trained in at least [Hu; Chap. 2]);
and when applying the BLF to the first sample based on the second sample, (e.g. the filter is applied to the estimated difference (2230) between the involved images; Figs. (22-23); [Briggs; 24: 37]) determine a contribution of the second sample to the filtering operation based on the depth difference between the first sample and the second sample; (e.g. applying deblock filtering accordingly (1060) based on depth map information (1050), as simulated in Figs. 10 A/B; [Briggs; 13: 35]).
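For illustration only (not part of the record), the depth-weighted contribution recited in claims 67-68 can be sketched as follows; the function names, the blending formula, and the `eps` regularizer are hypothetical placeholders, not the claimed implementation or anything disclosed by Hu or Briggs:

```python
# Hypothetical sketch of claims 67-68: a neighboring sample's contribution
# to the filtered output is made inversely proportional to its depth
# difference from the center sample. All names are illustrative.

def depth_weight(depth_a: float, depth_b: float, eps: float = 1.0) -> float:
    """Weight of a neighbor, inversely proportional to the depth difference.

    `eps` is an assumed regularizer that caps the weight at 1.0 when the
    two samples lie at the same depth.
    """
    return 1.0 / (eps + abs(depth_a - depth_b))

def filter_sample(center: float, neighbor: float,
                  depth_c: float, depth_n: float) -> float:
    """Blend a neighbor into the center sample by its depth weight."""
    w = depth_weight(depth_c, depth_n)
    # Normalized blend: a large depth difference shrinks w, so the
    # neighbor contributes less across a depth discontinuity.
    return (center + w * neighbor) / (1.0 + w)
```

A neighbor at the same depth contributes fully, while one across a depth discontinuity is largely ignored, which is the edge-preserving behavior a depth-adapted bilateral filter is meant to provide.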
Claim 69. (Presented) Hu/Briggs discloses - The video decoding device of claim 64, wherein the filtering operation is associated with the SAO, (e.g. see AVC standard configuration of the deblock techniques, which by definition includes SAO filters; [Hu; Chap. 4.3]) and wherein the processor being configured to adapt the filtering operation associated with the current video block based at least on the depth information comprises the processor being configured to: (e.g. depth flag (930) for continuity check, Figs. (9-10); [Briggs; 12: 50 - 13: 10]); determine, based on the depth information, a depth difference between a first sample and a second sample; and when applying the SAO to the first sample based on the second sample, determine a filtering offset to be applied to the first sample based on the depth difference between the first sample and the second sample; (e.g. see H.264/AVC deblocking techniques supported in the standard; [Hu; page 1; Chap. 4].)
Claim 70. (Presented) Hu/Briggs discloses - The video decoding device of claim 64, wherein the filtering operation is associated with the CC-SAO, wherein the depth information associated with the current video block includes a depth component associated with the current video block, and wherein the processor being configured to adapt the filtering operation based at least on the depth information comprises the processor being configured to: determine a filtering offset associated with the CC-SAO based on the depth component; (e.g. see H.264/AVC deblocking techniques supported in the standard; [Hu; page 1; Chap. 4].)
Claim 71. (Presented) Hu/Briggs discloses - The video decoding device of claim 64, wherein the filtering operation is associated with the ALF, and wherein the processor being configured to adapt the filtering operation based at least on the depth information comprises the processor being configured to: determine one or more classification parameters associated with the ALF based on the depth information; (e.g. see H.264/AVC deblocking techniques supported in the standard; [Hu; page 1; Chap. 4].)
Claim 72. (Presented) Hu/Briggs discloses - The video decoding device of claim 60, wherein the filtering operation is an in-loop filtering operation or an out-of-loop filtering operation; (e.g. see similar “in-loop filter” implementation in accordance with the H264/AVC codec; [Hu; page 1; Chap. 4].)
Claim 73. (Presented) Hu/Briggs discloses - The video decoding device of claim 60, wherein the processor is further configured to: obtain motion information associated with the current video block; and adapt the filtering operation associated with the current video block further based on the motion information; (e.g. see depth-map inter-frame prediction (i.e. motion estimation/compensation) implemented herein; [Briggs; 1: 60]; the same motivation applies herein.)
Claim 74. (Presented) Hu/Briggs discloses - A method implemented by a video decoding device, the method comprising: obtaining video data that includes a current video block; obtaining depth information associated with the current video block;
determining whether a depth discontinuity exists in the current video block based on the depth information; based on a determination that a depth discontinuity exists in the current video block, adapting a filtering operation associated with the current video block based at least on the depth information; and processing the current video block based at least on the adapted filtering operation. (Claim 74 recites the same elements as claim 60 above, in method form, and is rejected on the same premise.)
Claim 75. (Presented) Hu/Briggs discloses - The method of claim 74, wherein obtaining the depth information associated with the current video block comprises determining, based on a depth component available at the video decoding device, whether a depth discontinuity exists in the current video block; with or without discontinuity around objects, as shown in Figs. (31 -32); [Briggs; 12: 50 -13: 10; and Col. 32]; the same motivation applies herein.
Claim 76. (Presented) Hu/Briggs discloses - The method of claim 75, further comprising determining, based on the depth component available at the video decoding device, at least one of a position or a direction of the depth discontinuity in the current video block. (The same rationale/motivation apply as given to Claim (63) above.)
Claim 77. (Presented) Hu/Briggs discloses - The method of claim 74, wherein the filtering operation is associated with at least one of a bilateral filter (BLF), a deblocking filter (DBF), an adaptive loop filter (ALF), a sample adaptive offset (SAO) filter, or a cross-component sample adaptive offset (CC-SAO) filter; (e.g. see H.264/AVC deblock techniques supported in the standard; [Hu; page 1; Chap. 4].)
Claim 78. (Presented) Hu/Briggs discloses -The method of claim 74, further comprising: obtaining motion information associated with the current video block, wherein the filtering operation associated with current video block is adapted further based on the motion information. (The same rationale/motivation apply as given to Claim (73) above.)
Claim 79. (Presented) Hu/Briggs discloses - A computer program product which is stored on a non-transitory computer readable medium and comprises program code instructions for implementing the method of claim 74 when executed by a processor. (Claim 79 recites the same elements as claims 60 and 74 above, in non-transitory computer-readable-medium product form, and is rejected on the same premise.)
Prior Art Citations
7. The following prior art, made of record and not relied upon, is considered pertinent to Applicant's disclosure:
7.1. Patent documentation:
US 10,262,238 B2 Briggs; et al. G06V10/147; G06V30/19147;
US 11,972,561 B2 Lampros; et al. G06T7/11; G06T7/174; G06T7/0012;
US 11,562,468 B2 Brownlee; et al. G06T5/70; G06T5/20; G06T5/60;
US 11,532,073 B2 Vogels; et al. G06T5/50; G06T5/70; G06N7/01;
US 11,503,286 B2 Lim; et al. H04N19/117; H04N19/122; H04N19/70;
7.2. Non-Patent Literature:
_ Fast Depth Image Denoising and Enhancement Using Deep Convolutional Network; 2016;
_ Trained Bilateral Filters and Applications to Coding Artifacts Reduction; Hu; 2007;
_ Learning Sparse High Dimensional Filters; Jampani; 2016;
CONCLUSIONS
8. Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.
9. Any inquiry concerning this communication or earlier communications from the examiner should be directed to LUIS PEREZ-FUENTES (luis.perez-fuentes@uspto.gov), whose telephone number is (571) 270-1168. The examiner can normally be reached Monday-Friday, 8am-5pm. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, WILLIAM VAUGHN, can be reached at (571) 272-3922. The fax phone number for the organization where this application or proceeding is assigned is (571) 272-3922. Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated system, please call (800) 786-9199 (USA or CANADA) or (571) 272-1000.
/LUIS PEREZ-FUENTES/
Primary Examiner, Art Unit 2481.