DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claims 1-8, 10-12, and 14 have been amended.
Claims 9 and 13 have been cancelled.
Claims 15-16 have been added.
Claims 1-8, 10-12, and 14-16 are pending for examination.
Priority
Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.
Information Disclosure Statement
The information disclosure statements (IDS) submitted on 12/19/2024 and 02/04/2025 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1, 6, 10-11, and 14-15 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by PHILIPS (EP3979644A1).
Regarding claim 1, PHILIPS teaches a method for segmenting a plurality of data acquired by sensors, referred to as input data, said method being implemented by a segmenting device and comprising:
-determining weight values to be applied to the plurality of input data before the input data is processed (Fig. 1 & 2: block segmentation mask and block image data before processing) by at least one processing device configured to produce a processing result according to a criterion for optimising a quality of the input data processing result, said weight values being determined depending on said criterion and on another criterion for optimising a quantity of input data to be processed ([0029] The method may comprise: choosing a quality factor of the video compression algorithm such that at least the block segmentation masks are reconstructable without error from the at least one bitstream; and/or choosing a number of quantization levels used in the video compression algorithm such that at least the block segmentation masks are reconstructable without error from the at least one bitstream. Examiner note: weights are selected based on a quality factor and quantization levels),
-determining segmentation information of said plurality of input data, an item of the segmentation information of an item of the data of the plurality of input data being assigned a first value or a second value distinct from the first value, depending on said weight values (the segmentation may comprise classifying pixels as foreground or background by colour separation (colour keying) [0019]. [0045] The block segmentation mask 12 indicates whether a block of pixels 30 in the view 10 belongs to an area of interest 31 by setting a pixel value of each pixel in the block segmentation mask 12 to a first or second value.), and
-obtaining a subset of data to be processed by applying the determined segmentation information to said plurality of input data, the subset of data to be processed comprising the data of the plurality of input data associated with an item of segmentation information equal to the first value ([0045] The block segmentation mask 12 indicates whether a block of pixels 30 in the view 10 belongs to an area of interest 31 by setting a pixel value of each pixel in the block segmentation mask 12 to a first or second value.).
Regarding claim 6, the method for coding a plurality of data acquired by sensors is rejected on the same art and evidence used to reject the method for segmenting a plurality of data acquired by sensors of claim 1. PHILIPS further teaches the corresponding encoder (Fig. 1).
Regarding claim 10, the segmenting device for segmenting a plurality of data acquired by sensors is rejected on the same art and evidence used to reject the method for segmenting a plurality of data acquired by sensors of claim 1. PHILIPS further teaches at least one processor and at least one non-transitory computer readable medium ([0084]).
Regarding claim 11, the coding device for coding a plurality of data acquired by sensors is rejected on the same art and evidence used to reject the method for segmenting a plurality of data acquired by sensors of claim 1. PHILIPS further teaches at least one processor and at least one non-transitory computer readable medium ([0084]).
Regarding claim 14, PHILIPS teaches the non-transitory computer-readable medium comprising a computer program product stored thereon and comprising program code instructions for implementing the method according to claim 1 (see claim 1 rejection), when the instructions are executed by a processor of the segmenting device ([0084]).
Regarding claim 15, PHILIPS teaches the non-transitory computer-readable medium comprising a computer program product stored thereon and comprising program code instructions for implementing the method according to claim 6 (see claim 6 rejection), when the instructions are executed by a processor of the coding device ([0084]).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 2 and 4-5 are rejected under 35 U.S.C. 103 as being unpatentable over PHILIPS in view of Tadross (US 20210383534 A1).
Regarding claim 2, PHILIPS teaches the method according to claim 1. PHILIPS does not explicitly teach the following limitations; however, in an analogous art, Tadross teaches that the determining of the weight values comprises learning said weight values from said plurality of input data, said learning being performed by backpropagation of a gradient of a loss function combining the criterion for optimising the quality and the other criterion for optimising the quantity ([0090] At operation 708, the weights and biases of the reduced depth CNN are updated based on the loss determined at operation 706. In some embodiments, the loss back propagated through the layers of the reduced depth CNN, and the parameters of the reduced depth CNN may be updated according to a gradient descent algorithm based on the back propagated loss.).
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to take the teachings of Tadross and apply them to PHILIPS. One would be motivated to do so to produce accurate segmentation maps of high resolution and image classifications of high accuracy, without consuming the computational resources or time of conventional CNNs (Tadross: [0005]).
Regarding claim 4, PHILIPS teaches the method according to claim 1, wherein said plurality of input data comprises a plurality of views acquired by a plurality of cameras (Fig. 2: 10 one or more views), one said views comprising pixels (Fig. 2: 13 block image data). PHILIPS does not explicitly teach the following limitations; however, in an analogous art, Tadross teaches said weight values being comprised in a plurality of layers, one said layer being associated with one said view and comprising one said weight per pixel, and wherein the segmentation information comprises a plurality of segmentation maps, one said map being associated with one said view ([0047] Classification layer 224 receives as input third plurality of feature maps 210, and maps features represented therein to classification labels for each of the plurality of pixels of downsampled image 204.).
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to take the teachings of Tadross and apply them to PHILIPS. One would be motivated to do so to produce accurate segmentation maps of high resolution and image classifications of high accuracy, without consuming the computational resources or time of conventional CNNs (Tadross: [0005]).
Regarding claim 5, PHILIPS teaches the method according to claim 1. PHILIPS does not explicitly teach the following limitations; however, in an analogous art, Tadross teaches wherein said plurality of input data comprises a plurality of sequences of measurement data acquired by a plurality of sensors, said weight values being comprised in a plurality of layers, one said layer being associated with one said sequence of measurement data and comprising one said weight per item of measurement data, and wherein the segmentation information comprises a plurality of segmentation sequences, one said segmentation sequence being associated with one said sequence of measurement data (subsequent plurality of layers configured to map said identified features to one or more outputs, such as a segmentation map or image classification. Each convolutional layer comprises one or more convolutional filters, and each convolutional filter is “passed over”/receives input from, each sub-region of an input image, or preceding feature map, to identify pixel intensity patterns and/or feature patterns, which match the learned weights of the convolutional filter.).
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to take the teachings of Tadross and apply them to PHILIPS. One would be motivated to do so to produce accurate segmentation maps of high resolution and image classifications of high accuracy, without consuming the computational resources or time of conventional CNNs (Tadross: [0005]).
Allowable Subject Matter
Claims 3 and 7 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Claims 8, 12 and 16 are allowed.
Reasons for Allowance
The following is an examiner’s statement of reasons for allowance:
The present invention is directed to a method and device for video decoding that provide a plurality of decoded segmented input data and modified values of weights to a processing device.
The prior art of record, alone or in combination, does not teach or suggest a specific implementation with the following distinct properties:
decoding coded data, said coded data comprising segmentation information of a plurality of data acquired by sensors, referred to as input data, to produce decoded segmentation information and a subset of decoded data to be processed by a processing device configured to apply weights to the plurality of input data and to produce a processing result depending on a criterion for optimising a quality of the processing, an item of the segmentation information of an item of the input data being assigned a first value or a second value distinct from the first value, said subset of data to be processed having been obtained by applying said segmentation information to the plurality of input data, the subset of data to be processed comprising the data of the plurality of input data associated with an item of the segmentation information equal to the first value, said coded data further comprising modified values of said weights, said modified values having been determined for processing the plurality of segmented input data, depending on the criterion for optimising a quality of the processing and on a criterion for optimising a quantity of data of the subset of data to be processed.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to HESHAM K ABOUZAHRA whose telephone number is (571)270-0425. The examiner can normally be reached M-F 8-5.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jamie Atala can be reached at 57127227384. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/HESHAM K ABOUZAHRA/Primary Examiner, Art Unit 2486