Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of the Claims
Claims 16-17 and 26-27 are amended. Claims 16-35 are currently pending and have been considered by the Examiner.
Claim Objections
Claim 26 is objected to because of the following informalities: Line 6 appears to be improperly indented. The terms “with a pre-trained” in lines 5-6 should be part of the same limitation. In line 8, “that” should recite “than”. Appropriate correction is required.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 16-35 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Claims 16-25 each recite a method, and claims 26-35 each recite a device comprising a transceiver. A method and a device comprising a transceiver each fall under one of the four statutory categories of patent-eligible subject matter.
Claim 16
Step 2A Prong 1: Adapting the at least the portion of the second neural network based on the metadata associated with the pre-trained first neural network is a judgement and evaluation mental process which can reasonably be performed in the human mind with the aid of pencil and paper. A human mind can reasonably adapt a neural network, represented on paper or a computer, based on received metadata.
Step 2A Prong 2: A target device amounts to a generic computer component for applying the abstract ideas on a generic computer under MPEP 2106.05(f).
Receiving, from a device on a network, metadata associated with a pre-trained first neural network, wherein the metadata comprises information obtained based on a training of the pre-trained first neural network performed at the device amounts to mere data-gathering, an insignificant extra-solution activity under MPEP 2106.05(g).
The device on the network being a different device than the target device amounts to generic computer components for applying the abstract ideas on a generic computer under MPEP 2106.05(f).
Obtaining at least a portion of a second neural network from memory at the target device amounts to an insignificant extra-solution activity under MPEP 2106.05(g). A second neural network amounts to mere instructions to apply the abstract ideas on a generic computer under MPEP 2106.05(f).
Performing an inference during deployment of the adapted second neural network amounts to invoking computers merely as a tool to perform an existing process under MPEP 2106.05(f).
The additional elements as disclosed above, alone or in combination, do not integrate the abstract ideas into a practical application, as they are mere insignificant extra-solution activities in combination with generic computer functions implemented to perform the abstract ideas disclosed above. The claim is directed to an abstract idea.
Step 2B: A target device amounts to a generic computer component for applying the abstract ideas on a generic computer under MPEP 2106.05(f).
Receiving, from a device on a network, metadata associated with a pre-trained first neural network, wherein the metadata comprises information obtained based on a training of the pre-trained first neural network performed at the device is a well-understood, routine, conventional activity recognized by the courts under MPEP 2106.05(d)(II) (Receiving data over a network).
The device on the network being a different device than the target device amounts to generic computer components for applying the abstract ideas on a generic computer under MPEP 2106.05(f).
Obtaining at least a portion of a second neural network from memory at the target device is a well-understood, routine, conventional activity recognized by the courts under MPEP 2106.05(d)(II) (Retrieving information from memory).
A second neural network amounts to mere instructions to apply the abstract ideas on a generic computer under MPEP 2106.05(f).
Performing an inference during deployment of the adapted second neural network amounts to invoking computers merely as a tool to perform an existing process under MPEP 2106.05(f).
The additional elements as disclosed above, in combination with the abstract ideas, are not sufficient to amount to significantly more than the abstract ideas, as they are well-understood, routine, and conventional activities in combination with generic computer functions implemented to perform the abstract ideas disclosed above. The claim is not patent eligible.
Claim 17 incorporates the rejection of claim 16.
Step 2A Prong 1: The abstract ideas of claim 16 are incorporated.
Step 2A Prong 2 and Step 2B: Said metadata comprises at least one of: at least one batch size, at least one optimizer, at least one drop-out, at least one learning rate, a designation or a parameter of at least one loss function, at least one performance indicator related to an accuracy of training, at least one indicator related to an importance of at least one weight of at least one layer in the first neural network or the second neural network, at least one type of information related to pre-processing performed on at least one element of a training set, or at least one type of information representative of at least one position inside the first neural network where a prediction can be made amounts to a field of use and technological environment under MPEP 2106.05(h). The claim is not patent eligible.
Claim 18 incorporates the rejection of claim 16.
Step 2A Prong 1: The abstract ideas of claim 16 are incorporated. Adapting the second neural network by compressing, pruning, or dropping at least a portion of the second neural network based on the metadata is a judgement and evaluation mental process which can reasonably be performed in the human mind with the aid of pencil and paper.
Step 2A Prong 2 and Step 2B: The second neural network amounts to mere instructions to apply the abstract ideas on a generic computer under MPEP 2106.05(f). The claim is not patent eligible.
Claim 19 incorporates the rejection of claim 16.
Step 2A Prong 1: The abstract ideas of claim 16 are incorporated. Splitting at least a portion of the second neural network based on the metadata is a judgement and evaluation mental process which can reasonably be performed in the human mind with the aid of pencil and paper.
Step 2A Prong 2: The second neural network amounts to mere instructions to apply the abstract ideas on a generic computer under MPEP 2106.05(f).
Transmitting at least one split portion of the second neural network amounts to mere data outputting, an insignificant extra-solution activity under MPEP 2106.05(g).
Step 2B: The second neural network amounts to mere instructions to apply the abstract ideas on a generic computer under MPEP 2106.05(f).
Transmitting at least one split portion of the second neural network is analogous to transmitting data over a network, which the courts have recognized as a well-understood, routine, conventional activity under MPEP 2106.05(d)(II). The claim is not patent eligible.
Claim 20 incorporates the rejection of claim 16.
Step 2A Prong 1: The abstract ideas of claim 16 are incorporated. Said adapting comprises pre-processing at least a part of a training data set based on the metadata, wherein the training data set is configured for use during a training process for fine-tuning the second neural network is a judgement and evaluation mental process which can reasonably be performed in the human mind with the aid of pencil and paper. A human mind can reasonably pre-process training data based on metadata with the aid of pencil and paper or a computer.
Step 2A Prong 2 and Step 2B: The claim does not recite any abstract ideas which, alone or in combination, would integrate the abstract ideas into a practical application or which, in combination with the abstract ideas, would be sufficient to amount to significantly more than the abstract ideas. The claim is not patent eligible.
Claim 21 incorporates the rejection of claim 16.
Step 2A Prong 1: The abstract ideas of claim 16 are incorporated. Decoding the compressed metadata is a judgement and evaluation mental process which can reasonably be performed in the human mind with the aid of pencil and paper. A human mind can reasonably perform decoding with aid of pencil and paper or a computer.
Step 2A Prong 2: The metadata is received as compressed metadata that comprises compression of the metadata associated with multiple layers of the first neural network or parameters of at least one tensor associated with the multiple layers of the first neural network amounts to mere data-gathering, an insignificant extra-solution activity under MPEP 2106.05(g).
The target device comprises a decoder amounts to a mere field of use and technological environment under MPEP 2106.05(h).
Using the decoder amounts to mere instructions to apply the abstract ideas at a generic computer under MPEP 2106.05(f).
Step 2B: The metadata is received as compressed metadata that comprises compression of the metadata associated with multiple layers of the first neural network or parameters of at least one tensor associated with the multiple layers of the first neural network is analogous to receiving data over a network, which is a well-understood, routine, conventional activity recognized by the courts under MPEP 2106.05(d)(II).
The target device comprises a decoder amounts to a mere field of use and technological environment under MPEP 2106.05(h).
Using the decoder amounts to mere instructions to apply the abstract ideas at a generic computer under MPEP 2106.05(f). The claim is not patent eligible.
Claim 22 incorporates the rejection of claim 16.
Step 2A Prong 1: The abstract ideas of claim 16 are incorporated. The adapting is based on the metadata to meet one or more requirements is a judgement and evaluation mental process which can reasonably be performed in the human mind with the aid of pencil and paper.
Step 2A Prong 2 and Step 2B: The one or more requirements comprise at least one of an accuracy requirement, an energy requirement, a computational requirement, or a memory requirement amounts to a field of use and technological environment under MPEP 2106.05(h). The claim is not patent eligible.
Claim 23 incorporates the rejection of claim 16.
Step 2A Prong 1: The abstract ideas of claim 16 are incorporated. Picking a sub-part of the second neural network or selecting a set of parameter settings for the second neural network based on the metadata is a judgement and evaluation mental process which can reasonably be performed in the human mind with the aid of pencil and paper.
Step 2A Prong 2 and Step 2B: The second neural network amounts to mere instructions to apply the abstract ideas at a generic computer under MPEP 2106.05(f). The claim is not patent eligible.
Claim 24 incorporates the rejection of claim 16.
Step 2A Prong 1: The abstract ideas of claim 16 are incorporated. The adapting is performed based on the indication from the user is a judgement and evaluation mental process which can reasonably be performed in the human mind with the aid of pencil and paper.
Step 2A Prong 2: Providing a portion of the metadata to a user on a user interface, and receiving an indication from the user in response to the portion of the metadata being provided amount to insignificant extra-solution activities under MPEP 2106.05(g).
Step 2B: Providing a portion of the metadata to a user on a user interface, and receiving an indication from the user in response to the portion of the metadata being provided are analogous to presenting offers and gathering statistics, which are recognized by the courts as well-understood, routine, conventional activities under MPEP 2106.05(d)(II). The claim is not patent eligible.
Claim 25 incorporates the rejection of claim 16.
Step 2A Prong 1: The abstract ideas of claim 16 are incorporated.
Step 2A Prong 2 and Step 2B: The metadata indicates an importance of one or more weights in at least one of the first neural network or the second neural network amounts to a field of use and technological environment under MPEP 2106.05(h). The claim is not patent eligible.
Claim 26 recites a device comprising a transceiver which implements the abstract ideas from method claim 16 and is therefore rejected for at least the same reasons.
In Step 2A Prong 2 and Step 2B, a transceiver; a memory; and a processor amount to generic computer components for applying the abstract ideas on a generic computer under MPEP 2106.05(f). The claim is not patent eligible.
Claims 27-35 each recite a device comprising a transceiver which implements the abstract ideas from method claims 17-25, respectively, and are therefore rejected for at least the same reasons.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 16-20, 22-23, 26-30, and 32-33 are rejected under 35 U.S.C. 102(a)(1) and 102(a)(2) as being anticipated by Mun et al. (US 20190088251 A1).
Regarding claim 16, Mun teaches: A method performed by a target device, the method comprising: receiving, from a device on a network, metadata associated with a pre-trained first neural network, wherein the metadata comprises information obtained based on a training of the pre-trained first neural network performed at the device, wherein the device on the network is a different device than the target device; ([0063], lines 1-14; [0074], lines 1-6; [0079], lines 1-6; and [0104], from line 15 to “performed” in line 24 discloses a system comprising a speech signal recognition server (220, 520) and a device (210, 510, hereinafter called a user device). The server trains a global neural network model and sends the global neural network model to a user device. The limitation “a device” corresponds to Mun’s speech signal recognition server, the limitation “a target device” corresponds to Mun’s user device, and the limitation “metadata associated with a pre-trained first neural network” corresponds to trained parameters of the global model.)
obtaining at least a portion of a second neural network from memory at the target device; ([0034], lines 1-4; page 5, left column, lines 10-13; [0060], lines 9-16; [0104], from line 15 to “performed” in line 24 teaches the user device includes a memory for reading trained parameters of a neural network. A portion of a second neural network includes layers of the global model stored in memory of the user device.)
adapting the at least the portion of the second neural network based on the metadata associated with the pre-trained first neural network; and ([0104], lines 15-20)
performing an inference during deployment of the adapted second neural network. (Fig. 3 and [0089]-[0090]. The adapted second neural network is deployed on the target device because at least the personalization layer is deployed by the user device.)
Regarding claim 17, Mun teaches: The method of claim 16, wherein said metadata comprises at least one of: at least one batch size, at least one optimizer, at least one drop-out, at least one learning rate, a designation or a parameter of at least one loss function, at least one performance indicator related to an accuracy of training, at least one indicator related to an importance of at least one weight of at least one layer in the first neural network or the second neural network, at least one type of information related to pre-processing performed on at least one element of a training set, or at least one type of information representative of at least one position inside the first neural network where a prediction can be made. (Fig. 3 and [0090] discloses output layer 324 outputs a recognition result value of the acoustic model. This layer is “at least one type of information representative of at least one position inside the first neural network where a prediction can be made.”)
Regarding claim 18, Mun teaches: The method of claim 16, wherein adapting the second neural network comprises compressing, pruning, or dropping at least a portion of the second neural network based on the metadata. ([0104], from line 15 to the end of the paragraph discloses training the personalization layer in combination with the global model at the user device (510), and then discarding the global model from the user device after the training of the personalization layer. Discarding the global model from the user device indicates dropping these layers from the combined model at the user device.)
Regarding claim 19, Mun teaches: The method of claim 16, wherein the adapting comprises: splitting at least a portion of the second neural network based on the metadata; and ([0104], from line 15 to the end of the paragraph discloses training the personalization layer in combination with the global model at the user device (510), and then discarding the global model from the user device after the training of the personalization layer. Discarding the global model indicates splitting these layers from the combined model at the user device.)
transmitting at least one split portion of the second neural network. ([0061], lines 21 to “device” in line 31 discloses once the personalization layer parameters have been trained, they are stored in memory of the device and then loaded into the processor of the device. Loading parameters from memory to a processor amounts to transmitting parameters from memory to the processor.)
Regarding claim 20, Mun teaches: The method of claim 16, wherein said adapting comprises pre-processing at least a part of a training data set based on the metadata, ([0081]-[0082] where pre-processing includes selecting speech signals determined to be suitable for training the personalization layer. Since the determination is based on an incorrect result, and the output layer of the global model outputs the recognition result, the determination would be based at least on the trained parameters of the global model (metadata).)
wherein the training data set is configured for use during a training process for fine-tuning the second neural network. ([0104], from line 15 to “performed” in line 24 teaches training the personalization layer at the user device.)
Regarding claim 22, Mun teaches: The method of claim 16, wherein the adapting is based on the metadata to meet one or more requirements, wherein the one or more requirements comprise at least one of an accuracy requirement, ([0081]-[0082] teaches fine-tuning the personalization layer when it, in combination with the global model, outputs an incorrect result.)
an energy requirement, a computational requirement, or a memory requirement.
Regarding claim 23, Mun teaches: The method of claim 16, wherein the adapting comprises picking a sub-part of the second neural network or selecting a set of parameter settings for the second neural network based on the metadata. ([0081]-[0082] teaches fine-tuning the personalization layer when it, in combination with the global model, outputs an incorrect result. [0089], lines 1-7 discloses the personalization layer comprises connection weights.)
Regarding claim 26, Mun teaches: A target device comprising: a transceiver; ([0034] discloses a terminal/device (hereinafter called a user device). The transmitting from the user device in Fig. 3 and [0090], lines 1-3, and the receiving of a global model in [0104], lines 15-17, are evidence of the user device comprising a transceiver.)
a memory; and ([0034], lines 1-4)
a processor configured to: ([0034], from line 1 to "processor" in line 5)
receive, via the transceiver from a device on a network, metadata associated with a pre-trained first neural network, wherein the metadata comprises information obtained based on a training of the pre-trained first neural network performed at the device, wherein the device on the network is a different device that the target device; ([0063], lines 1-14; [0074], lines 1-6; [0079], lines 1-6; and [0104], from line 15 to “performed” in line 24 discloses a system comprising a speech signal recognition server (220, 520) and a device (210, 510, hereinafter a user device). The server trains a global neural network model and sends the global neural network model to a user device. The limitation “a device” corresponds to Mun’s speech signal recognition server, the limitation “a target device” corresponds to Mun’s user device, and the limitation “metadata associated with a pre-trained first neural network” corresponds to trained parameters of the global model.)
obtain at least a portion of a second neural network from the memory; ([0034], lines 1-4; page 5, left column, lines 10-13; [0060], lines 9-16; [0104], from line 15 to “performed” in line 24 teaches the user device includes a memory for reading trained parameters of a neural network. A portion of a second neural network includes layers of the global model stored in memory of the user device.)
adapt the at least the portion of the second neural network based on the metadata associated with the pre-trained first neural network; and ([0104], lines 15-20)
perform an inference during deployment of the adapted second neural network. (Fig. 3 and [0089]-[0090]. The adapted second neural network is deployed on the target device because at least the personalization layer is deployed by the user device.)
Claims 27-30 and 32-33 each recite a device which implements the same features as the method of claims 17-20 and 22-23, respectively, and are therefore rejected for at least the same reasons.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 21 and 31 are rejected under 35 U.S.C. 103 as being unpatentable over Mun et al. (US 20190088251 A1) in view of Bloom (US 20180336463 A1, cited in PTO-892 issued 09/18/2025).
Regarding claim 21, Mun teaches: The method of claim 16, wherein
However, Mun does not explicitly teach: the metadata is received as compressed metadata that comprises compression of the metadata associated with multiple layers of the first neural network or parameters of at least one tensor associated with the multiple layers of the first neural network, wherein the target device comprises a decoder, and wherein the method comprises decoding the compressed metadata at the decoder.
But Bloom teaches: the metadata is received as compressed metadata that comprises compression of the metadata ([0054], lines 1-19 and Fig. 5. The claim limitation of “metadata” is input data 540, “compressed metadata” is encoded data 550, and receiving compressed metadata is the second ML model component 530 receiving encoded data 550.)
wherein the target device comprises a decoder, and wherein the method comprises decoding the compressed metadata at the decoder. ([0054], lines 1-19 discloses decoder 514 comprises the layers of second ML model component 530)
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have incorporated Bloom’s autoencoder into Mun. A motivation for the combination is to obscure data for transport and prevent an attacker from reconstructing the original data. (Bloom, [0022])
Claim 31 recites a device which implements the same features as the method of claim 21 and is therefore rejected for at least the same reasons.
Claims 24 and 34 are rejected under 35 U.S.C. 103 as being unpatentable over Mun et al. (US 20190088251 A1) in view of Sikka et al. (US 20210012212 A1, cited in PTO-892 issued 09/18/2025).
Regarding claim 24, Mun teaches: The method of claim 16, further comprising:
However, Mun does not explicitly teach: providing a portion of the metadata to a user on a user interface; and receiving an indication from the user in response to the portion of the metadata being provided, and wherein the adapting is performed based on the indication from the user.
But Sikka teaches: providing a portion of the metadata to a user on a user interface; and ([0073], lines 1-6 and [0074], lines 5-end, where “metadata” includes textual or graphical descriptions of a neural network.)
receiving an indication from the user in response to the portion of the metadata being provided, and wherein the adapting is performed based on the indication from the user. ([0075])
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have incorporated Sikka’s method for dynamically modifying a neural network into Mun. A motivation for the combination is that Sikka’s method gives a user finer control over how a neural network is modified. ([0075])
Claim 34 recites a device which implements the same features as the method of claim 24 and is therefore rejected for at least the same reasons.
Claims 25 and 35 are rejected under 35 U.S.C. 103 as being unpatentable over Mun et al. (US 20190088251 A1) in view of Alakuijala et al. (US 20190251444 A1, cited in PTO-892 issued 09/18/2025).
Regarding claim 25, Mun teaches: The method of claim 16, wherein the metadata indicates
However, Mun does not explicitly teach: an importance of one or more weights
But Alakuijala teaches: an importance of one or more weights (All of [0027] and [0067], lines 1-2)
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have estimated an edge utility of Mun’s global model as part of the metadata. A motivation for the combination is to measure an amount by which the edge contributes to correct predictions provided by the neural network. (Alakuijala, [0027])
Claim 35 recites a device which implements the same features as the method of claim 25 and is therefore rejected for at least the same reasons.
Response to Arguments
Applicant’s arguments with respect to the objection to the invention title have been fully considered and are persuasive. The objection has been withdrawn.
The following are Examiner’s responses to Applicant’s arguments filed 12/18/2025.
Applicant’s Arguments Under 35 U.S.C. 101: On page 8, Applicant argues that the claims cannot be performed in the human mind because metadata is received from a device on a network and at least a portion of a second neural network is obtained from memory at a target device. In the final paragraph on page 8, Applicant submits that the claims have not been considered as a whole, including improvements, and Applicant lists a number of improvements.
Examiner’s Response: Applicant's arguments have been fully considered but they are not persuasive. Examiner agrees that receiving metadata from a device on a network and obtaining at least a portion of a second neural network from memory at a target device cannot be performed in the human mind. However, these limitations are not considered to be part of the mental process. They are analyzed as additional elements in Step 2A Prong 2.
In claim 16, the mental process is adapting the at least the portion of the second model based on the metadata associated with the pre-trained first neural network. Claim 17 recites said metadata comprises at least one indicator related to an importance of at least one weight of at least one layer in the first neural network or the second neural network. Claim 18 recites adapting the second neural network comprises pruning or dropping at least a portion of the second neural network based on the metadata. A traditional neural network can be represented as an array of nodes organized by layers, wherein a node in one layer is connected to a node in another layer by a weighted edge. A person can reasonably use importance scores for weights in a layer of the second neural network to identify the edge with the least important weight and adapt the second neural network, represented on paper or a computer, by pruning/deleting the edge having the least important weight. Claim 16 recites an abstract idea.
MPEP 2106.05(f) discloses that examiners may consider the following:
(1) Whether the claim recites only the idea of a solution or outcome i.e., the claim fails to recite details of how a solution to a problem is accomplished. The recitation of claim limitations that attempt to cover any solution to an identified problem with no restriction on how the result is accomplished and no description of the mechanism for accomplishing the result, does not integrate a judicial exception into a practical application or provide significantly more because this type of recitation is equivalent to the words "apply it".
(2) Whether the claim invokes computers or other machinery merely as a tool to perform an existing process. Use of a computer or other machinery in its ordinary capacity for economic or other tasks (e.g., to receive, store, or transmit data) or simply adding a general purpose computer or computer components after the fact to an abstract idea (e.g., a fundamental economic practice or mathematical equation) does not integrate a judicial exception into a practical application or provide significantly more.
Regarding the final paragraph on page 8, the target device and the device on the network each amount to a generic computer component for applying the abstract ideas on a generic computer. These devices invoke computers merely as a tool to perform an existing process. Training of the pre-trained first neural network performed at the device amounts to mere instructions to apply the abstract ideas on a generic computer. The training recites only the idea of a solution or outcome because the claim fails to recite details of how the training is accomplished beyond traditional neural network training. Performing an inference (at the target device) during deployment of the adapted second neural network amounts to invoking computers merely as a tool to perform an existing process of data processing.
It is noted that the features upon which applicant relies (i.e., solving a rate distortion optimization problem and/or computing cost and related distortion; and using a smart TV, a mobile phone, a set-top box, an edge device, a cloud device) are not recited in the rejected claim(s). Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993).
Applicant’s Arguments Under 35 U.S.C. 103: On pages 9-10, Applicant argues that Alakuijala discloses the target device being the same as the device on the network, and thus does not teach the limitations of pending claim 16.
Examiner’s Response: Applicant’s arguments with respect to claim 16 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Asher H. Jablon whose telephone number is (571)270-7648. The examiner can normally be reached Monday - Friday, 9:00 am - 6:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Abdullah Al Kawsar can be reached at (571)270-3169. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/A.H.J./Examiner, Art Unit 2127
/ABDULLAH AL KAWSAR/Supervisory Patent Examiner, Art Unit 2127