DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.
Claim Objections
Claim 13 is objected to because of the following informalities: “The server of claim 2” recites an incorrect dependency. Appropriate correction is required.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: processing element and encoder in claim 1, decoder in claim 8, encoder in claims 15, 16.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
A review of the specification shows that the following appears to be the corresponding structure described in the specification for the 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph limitation: neural processing unit (NPU).
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claim(s) 1, 3, 5-7, 15, 17-20 is/are rejected under 35 U.S.C. 103 as being unpatentable over US 20220343523 A1 Cho; Soonyong et al. (hereafter Cho), and further in view of US 20230070216 A1 DO; Ji-Hoon et al. (hereafter Do).
Regarding claim 1, Cho discloses A camera device capable of communicating with a server (Fig.1), comprising: at least one processing element (PE) configured to obtain, from a captured video, a plurality of feature maps corresponding to each of convolution layers by using multiple convolution layers of an artificial neural network model (Figs.1, 7B, [30], [49], [109]; the processor circuitry is the PE);
Cho fails to disclose an encoder configured to select at least one feature map from the plurality of feature maps, and encode the selected feature map thereby being outputting as a bitstream; wherein the bitstream is transmitted to the server through a communication network.
However, Do teaches an encoder configured to select at least one feature map from the plurality of feature maps (Fig.9, [187], [227]-[228]; the feature frame configuration unit and the encoding unit together constitute an encoder that selects a multi-feature frame representing the feature maps for encoding), and encode the selected feature map thereby being outputting as a bitstream ([148]); wherein the bitstream is transmitted to the server through a communication network (Fig.2, [97]; the compressed bitstream is transmitted from the client device to the server).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the camera device disclosed by Cho to include the teaching of Do, in the same field of endeavor, in order to improve compression efficiency, as identified by Do.
Regarding claim 3, Do teaches The camera device of claim 1, wherein the at least one feature map is selected based on a size of an input or output feature maps of each of the convolution layers ([20]).
Regarding claims 5, 19, Do teaches The camera device of claim 1, wherein the at least one feature map is selected based on particular information ([214]).
Regarding claims 6, 20, Do teaches The camera device of claim 5 wherein the particular information is received from the server ([376]-[377]).
Regarding claim 7, Do teaches The camera device of Claim 5, wherein the particular information includes identification information of the at least feature map or identification information of a layer corresponding to the at least one feature map ([376]-[377]).
Regarding claim 15, Cho discloses An electronic device for distributed processing of an artificial neural network (Fig.1), comprising: a multiply-accumulate (MAC) operator configured to perform operations of some convolution layers among a plurality of convolution layers of the artificial neural network thereby generating a plurality of output feature maps (Figs.7A-7B, [102]-[111]).
Cho fails to disclose an encoder configured to selectively encode at least one output feature map from the plurality of output feature maps thereby being outputted as a bitstream; wherein the electronic device is configured for distributed processing of the artificial neural network and includes a first electronic device and a second electronic device, wherein the electronic device refers to the first electronic device, and wherein the bitstream is transmitted to the second electronic device.
However, Do teaches an encoder configured to selectively encode at least one output feature map from the plurality of output feature maps thereby being outputted as a bitstream (Fig.9, [187], [227]-[228], [148]); wherein the electronic device is configured for distributed processing of the artificial neural network and includes a first electronic device and a second electronic device, wherein the electronic device refers to the first electronic device, and wherein the bitstream is transmitted to the second electronic device (Fig.2, [97]).
Regarding claim 17, Cho discloses The electronic device of claim 15, wherein the electronic device includes a camera, a neural processing unit (NPU), a processor, or a server ([36]-[39]).
Regarding claim 18, Do teaches The electronic device of claim 15, wherein the at least one output feature map is selected based on weights of the convolution layers, attributes of input feature maps, or a size of the at least one output feature map ([20]-[23]).
Claim(s) 2, 16 is/are rejected under 35 U.S.C. 103 as being unpatentable over Cho, in view of Do, and further in view of US 20250310548 A1 ROSEWARNE; Christopher James et al. (hereafter ROSEWARNE).
Regarding claims 2, 16, ROSEWARNE teaches The camera device of claim 1, wherein the encoder includes: a multi-scale feature fusion (MSFF) and a single-stream feature codec (SSFC) encoder ([103]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having all the references Cho, Do, and ROSEWARNE before him/her, to modify the camera device disclosed by Cho to include the teachings of Do and ROSEWARNE, in the same field of endeavor, in order to improve compression efficiency, as identified by Do, and to provide a system for encoding and decoding tensors from a convolutional neural network, as identified by ROSEWARNE.
Claim(s) 4 is/are rejected under 35 U.S.C. 103 as being unpatentable over Cho, in view of Do, and further in view of US 20180286040 A1 SASHIDA.
Regarding claim 4, SASHIDA teaches The camera device of claim 1 wherein the at least one feature map is selected based on a sum value of weights for each of the convolution layers ([89]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having all the references Cho, Do, and SASHIDA before him/her, to modify the camera device disclosed by Cho to include the teachings of Do and SASHIDA, in the same field of endeavor, in order to improve compression efficiency, as identified by Do, and to provide a technique that identifies the morphology of individual cells and/or biological substances inside the cells, as identified by SASHIDA.
Claim(s) 8 is/are rejected under 35 U.S.C. 103 as being unpatentable over US 20220375133 A1 WANG; Jing et al. (hereafter Wang), and further in view of Cho.
Regarding claim 8, Wang discloses A server capable of communicating with a camera device via a communication network (Fig.2b), comprising: a decoder configured to decode a bitstream received from the camera device ([204]) and reconstruct at least one feature map for a video captured by the camera device ([222], [300]; the decoder decodes the encoded data and reconstructs a feature map for the camera-captured image data in the encoded data); and at least one processing element (PE) configured to perform operations of an artificial neural network model (Fig.2; the processing unit is the PE); wherein the at least one feature map is applied as input data to a N-th layer among a plurality of layers of the artificial neural network model, thereby allowing the at least one PE to perform operations of the artificial neural network (Fig.4, [211], [229]-[230]),
Wang fails to disclose the at least one feature map is an output feature map of a M-th layer among the plurality of layers, and wherein the N-th layer follows the M-th layer, wherein the N and the M are each integers.
However, Cho teaches wherein the at least one feature map is an output feature map of a M-th layer among the plurality of layers, and wherein the N-th layer follows the M-th layer ([211], [214]), wherein the N and the M are each integers (Fig.7B, [107]-[109]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the server capable of communicating with a camera device disclosed by Wang to include the teaching of Cho, in the same field of endeavor, in order to obtain accurate depth information at low illuminance, as identified by Cho.
Claim(s) 9 is/are rejected under 35 U.S.C. 103 as being unpatentable over Wang, in view of Cho, and further in view of US 20250310548 A1 ROSEWARNE.
Regarding claim 9, ROSEWARNE teaches The server of claim 8, wherein the decoder includes a multi-scale feature reconstruction (MSFR) and a single-stream feature codec (SSFC) decoder ([103]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having all the references Wang, Cho, and ROSEWARNE before him/her, to modify the server capable of communicating with a camera device disclosed by Wang to include the teachings of Cho and ROSEWARNE, in the same field of endeavor, in order to obtain accurate depth information at low illuminance, as identified by Cho, and to provide a system for encoding and decoding tensors from a convolutional neural network, as identified by ROSEWARNE.
Claim(s) 10, 12-14 is/are rejected under 35 U.S.C. 103 as being unpatentable over Wang, in view of Cho, and further in view of Do.
Regarding claim 10, Do teaches The server of claim 8, wherein the at least one feature map is selected based on a size of the output feature map for each of convolution layers ([20]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having all the references Wang, Cho, and Do before him/her, to modify the server capable of communicating with a camera device disclosed by Wang to include the teachings of Cho and Do, in the same field of endeavor, in order to obtain accurate depth information at low illuminance, as identified by Cho, and to improve compression efficiency, as identified by Do.
Regarding claim 12, Do teaches The server of claim 8, wherein the at least one feature map is selected based on particular information ([214]).
Regarding claim 13, Do teaches The server of claim 2, wherein the particular information is received from the server ([376]-[377]).
Regarding claim 14, Do teaches The server of claim 13, wherein the particular information includes identification information of the at least feature map or identification information of a layer corresponding to the at least one feature map ([376]-[377]).
Claim(s) 11, 18 is/are rejected under 35 U.S.C. 103 as being unpatentable over Wang, in view of Cho, and further in view of SASHIDA.
Regarding claim 11, SASHIDA teaches The server of claim 8, wherein the at least one feature map is selected based on a sum value of weights for each of convolution layers ([89]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having all the references Wang, Cho, and SASHIDA before him/her, to modify the server capable of communicating with a camera device disclosed by Wang to include the teachings of Cho and SASHIDA, in the same field of endeavor, in order to obtain accurate depth information at low illuminance, as identified by Cho, and to provide a technique that identifies the morphology of individual cells and/or biological substances inside the cells, as identified by SASHIDA.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: US 20210265016 A1, US 20190197420 A1, US 20230421764 A1, US 20240202590 A1.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to TRACY Y. LI whose telephone number is (571)270-3671. The examiner can normally be reached Monday through Friday (8:30 AM - 4:30 PM) EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, David Czekaj can be reached at (571) 272-7327. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/TRACY Y. LI/Primary Examiner, Art Unit 2487