DETAILED ACTION
This Office Action is in response to the communication (Amendment) filed on 10/28/2025.
Claims 1 – 20 are pending. Claims 1, 8, and 15 are in independent form. Claims 1 – 20 were amended. This action is Final.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
This Office Action is in response to the applicant’s remarks and arguments filed on 10/28/2025.
Claims 1 – 20 were amended. Claims 1 – 20 remain pending in the application. Claims 1 – 20 are being considered on the merits.
The rejection of claims 1 – 20 under 35 U.S.C. § 101 has been withdrawn due to the amendment to the claims filed on 10/28/2025.
Response to Arguments
The applicant’s remarks and/or arguments, filed on 10/28/2025, have been fully considered with the following result(s).
The examiner is entitled to give claim limitations their broadest reasonable interpretation in light of the specification. See MPEP 2111 [R-1], Interpretation of Claims - Broadest Reasonable Interpretation. The applicant always has the opportunity to amend the claims during prosecution, and broad interpretation by the examiner reduces the possibility that the claim, once issued, will be interpreted more broadly than is justified. In re Prater, 162 USPQ 541, 550-51 (CCPA 1969).
Response to Claims Objection
Applicant’s arguments filed on 10/28/2025 regarding the Claims Objection have been fully considered and are persuasive. The previous Claims Objection has been withdrawn.
Response to 35 U.S.C. § 101 Remarks
Applicant’s arguments filed on 10/28/2025 regarding the 35 U.S.C. § 101 rejection have been fully considered and are persuasive. Regarding the remark, “Here, the specification describes improvements, for example, "shares only the classification data for the classification associated with the highest confidence value avoid excessive network traffic". See para. [0093]. Additionally, the claims "address and solve the above-described problems and other problems related to the accuracy of compressed machine-learning models." See para. [0029]. The steps of “causing the processor to transmit through the interface... transmitting the highest confidence value data reduces network traffic of the edge computing environment" recited by independent claims 1, 8, and 15 reflect these improvements.”:
The examiner agrees that the limitation “wherein transmitting the highest confidence value data reduces network traffic of the edge computing environment” reflects a technical improvement to the technology. The 35 U.S.C. § 101 rejection has been withdrawn.
Response to 35 U.S.C. § 103 Remarks
Applicant's arguments regarding the amendments to independent claims 1, 8, and 15, found on pages 18 – 21 of the remarks filed on 10/28/2025, have been fully considered and are persuasive. Therefore, the previous rejection of the claims under 35 U.S.C. § 103 has been withdrawn.
However, upon further consideration, a new ground of rejection is made in view of newly found prior art (Crandall et al., US Pub. No. US 20210390408 A1 (hereafter Crandall)) and in view of the previously cited prior art. Crandall, in combination with the previously cited prior art, discloses each element of the claims highlighted by the applicant.
For further details, please see the claim rejections under 35 U.S.C. § 103 below.
Claim Objections
Claims 1, 8, and 15 are objected to because of the following informalities:
Claim 1: data corresponding to a reference classification dataset wherein the data includes a reference classification and reference confidence value data
Claim 8: data corresponding to a reference classification dataset from the identified nodes, wherein the reference classification dataset includes a reference classification and reference confidence value data.
Claim 15: data corresponding to a reference classification dataset wherein the data includes a reference classification and reference confidence value data.
The examiner suggests amending the underlined limitations of claims 1 and 15 to be consistent with claim 8.
Appropriate correction is required.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION. —The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 3 and 8 – 14 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
Claims 3 and 10 recite the limitation "the node" in “wherein the node is selected through a load-balancing queue”. There is insufficient antecedent basis for this limitation in the claims. It is unclear to which “node” the limitation “the node” refers.
Claim 8 recites the limitation "the identified nodes" in “receiving code configured to cause the one or more computer processors to receive, by an interface of a first machine learning node of the identified set of machine learning nodes, data corresponding to a reference classification dataset from the identified nodes”. There is insufficient antecedent basis for this limitation in the claim. It is unclear to which “nodes” the limitation “the identified nodes” refers.
The remaining dependent claims 9 – 14 are also rejected due to their dependency on rejected independent claim 8.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 4, 5, 8, 11, 12, 15, 18, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Greschler et al., US Pub. No. US 20210271217 A1 (hereafter Greschler), in view of Shu, US Pub. No. US 20190034757 A1 (hereafter Shu), Hardie et al., US Pat. No. US 10482904 B1 (hereafter Hardie), and Crandall et al., US Pub. No. US 20210390408 A1 (hereafter Crandall).
Regarding claim 1, Greschler teaches the invention substantially as claimed: A method of resource allocation for sensor-based neural networks, executable by a processor, enhancing an edge computing environment of machine learning nodes, comprising: identifying an identified set of nodes associated with an edge computing environment; (e.g. FIG. 2, 3, 4, and [0006]: “The data captured may be processed on the sensor (known as Edge Computing) or sent to a storage and processing system (server or cloud storage) for processing using a combination of image, video, audio, object detection, facial recognition, or other machine learning models”) The citations at FIG. 2, 3, 4, and [0006] disclose an edge computing system comprising multiple data capture sensors/nodes, which can process captured data. The teaching of Greschler does not clearly indicate that the “nodes” are machine learning nodes. The machine learning nodes are taught by Crandall, as discussed below.
Receiving, by an interface of a first node of the identified set of nodes, data corresponding to a reference classification dataset (FIG. 1, FIG. 6, FIG. 13 and [0043]: “Sensing Device 601 may collect data, including profile data, pertaining to the individuals or customers in the place of business and may compare the collected profile data with data stored in Database 1301. For example, the collected data may include facial recognition data to identify the individual or customer in the place of business, and the stored data may include previous customers in the place of business. In this example, the system may determine if the current customer is a repeat customer or a new customer. If a match is not identified, then the current customer may be determined not to be a repeat customer, and the database may be updated to include a record of the current customer. If a match is identified, then the current customer may be determined to be a repeat customer, and the database may be updated to include the current visit by the repeat customer.”) The citation discloses an example in which the sensing device can collect and analyze customer data to identify whether the customer is a new or repeat customer, and transmit the analyzed data to the database for updating accordingly. The data generated by “compare the collected profile data”, and used to update the database accordingly, would be considered a classification dataset, as it is used for classifying the type of customer.
selecting a selected node from among the identified set of nodes based on the selected node having a greatest confidence interval associated with the reference classification within the reference confidence value data; (e.g. FIG. 4 and [0033]: “In addition, values of distance d from each of additional Sensing Devices 401, 403-407 may be lower than the value obtained from Sensing Device 402. In this example, the maximum value of distance d as observed by Sensing Device 402 may be determined to be the most accurate measurement of distance d between the individuals because this observed value of distance d may be collected from a point that may provide the most accurate value for the distance between the individuals.”) The citation discloses that device 402 is determined/selected because it has the most accurate measurement of distance d among the devices.
Greschler fails to teach: a machine learning node; wherein the data includes a reference classification and reference confidence value data; causing the processor to transmit through the interface of the first machine learning node, a highest confidence value data generated by the first machine learning node, to a second machine learning node of the identified set of machine learning nodes wherein transmitting the highest confidence value data reduces network traffic of the edge computing environment; ...... and assigning the selected node to process the classification dataset.
However, Crandall teaches machine learning node (e.g. FIG. 4, [0038]: “FIG. 4 shows a diagram of an exemplary CDL system in accordance with an embodiment of the present disclosure. FIG. 4 shows 3 devices 402 (which can function as CDL nodes) that use CDL to learn a model on a data set that is distributed over several computational nodes in a decentralized manner. While 3 devices are shown in FIG. 4, it should be understood that any number of devices can be used in a CDL system in accordance with embodiments of the present disclosure. Devices 402 can communicate with each other via respective communications devices 414 and communications buffers 408.” and [0015]: “Embodiments of the present disclosure provided systems and methods for consensus driven learning (CDL) using machine learning (ML) to enable devices to learn a model on a data set that is distributed over several computational nodes in a decentralized manner.”) The citation at FIG. 4 and [0038] discloses that device 402 can operate as a CDL node, and [0015] discloses that CDL uses machine learning.
causing the processor to transmit through the interface of the first machine learning node, a data generated by the first machine learning node, to a second machine learning node of the identified set of machine learning nodes (e.g. FIG. 4 and [0041]: “Using the CDL methods described above, each device 402 can generate a local model and can send updates to the other devices as it gathers data. Using these updates, each device can update its local model as described above, and devices 402 can come to a consensus regarding the data being measured (e.g., atmospheric data, sediment data, wind data, water flow data, etc.)” and [0046]: “In an embodiment, whenever a device (e.g., device 403a) sends an update, it can send a partial update instead of sending a full update containing information from its local model (e.g., local model 406a). For example, in an embodiment, device 403a can be configured to send only a portion of its parameters as an update. In an embodiment, whenever a device (e.g., device 403a) sends another update, it can send another portion of its parameters as a second update.”) The citations at FIG. 4, [0041], and [0046] disclose that the devices 402 can communicate with each other and send a portion of their parameters/data. The teaching of Crandall does not teach that the data would be the “highest confidence value data”. However, this limitation is taught by Hardie (e.g. 42-Col 12, lines 28 – 40). Therefore, by combining the teaching of Crandall, that the devices/nodes can send a portion of the update data between the devices, with the teaching of Hardie, that the data is the “highest confidence value data”, one of ordinary skill in the art would arrive at the claimed invention.
wherein transmitting the highest confidence value data reduces network traffic of the edge computing environment (e.g. [0020]: “By sending partial updates, embodiments of the present disclosure can reduce the amount of bandwidth required to transfer data between nodes.”)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to add the machine learning node and the limitation causing the processor to transmit through the interface of the first machine learning node, a highest confidence value data generated by the first machine learning node, to a second machine learning node of the identified set of machine learning nodes, wherein transmitting the highest confidence value data reduces network traffic of the edge computing environment, as taught in Crandall’s invention, into Greschler’s invention, because the additional features would enable nodes to independently analyze data and generate confidence values on their own, and transferring only the highest confidence value data would lower the network traffic of the whole system while still enabling accurate and efficient resource allocation, improving efficiency and reliability.
However, Shu teaches wherein the data includes a reference classification and reference confidence value data (e.g. FIG. 2, 3, 4, and [0039]: “For example, the image 210 is most likely an indoor swimming pool, and the trained attribute identification system 104 on the mobile device 102 can quickly recognize the image 210 with a list 220 of possible attributes (indoor swimming pool, water park, outdoor swimming pool . . .) with respective confidence level (30.6%, 18.1%, 10.3% . . .).”) The citation discloses the list of possible attributes/reference classification and the confidence level/reference confidence value data.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to add the limitation wherein the data includes a reference classification and reference confidence value data, as taught in Shu’s invention, into Greschler’s invention, because including the reference classification and confidence value in the dataset would help reduce errors from uncertain predictions and ensure that tasks are assigned to the node most likely to produce correct results.
However, Hardie teaches and assigning the selected node to process the classification dataset. (e.g. 42-Col 12, lines 28 – 40: “In some examples, the remote speech processing service 110 may determine whether the final weighted confidence scores are higher than a threshold confidence score, and select the speech interface device 108 whose weighted confidence score is higher than the threshold to respond to the command. In other examples, the remote speech processing service 110 may simply compare the weighted confidence scores with each other and select the speech interface device 108 with the highest weighted confidence score to respond to the speech utterance. In the example of FIG. 1, the speech processing service 110 may determine that the speech interface device 108A is the device to respond to the command in the speech utterance 106 to turn off the alarm.”) The citation discloses the concept of selecting a device/node that has the highest weighted confidence score to respond to/process the speech utterance/classification dataset.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to add the limitation assigning the selected node to process the classification dataset, as taught in Hardie’s invention, into Greschler’s invention, because this would enable the system not only to identify the most reliable node but also to directly use it to perform the tasks, which improves system efficiency, strengthens the resource allocation process, and ensures that the tasks are performed with accurate and consistent results.
Regarding claim 4, Greschler, in view of Shu, Crandall, and Hardie, discloses the method of claim 1, and Greschler further teaches wherein the selected machine learning node is selected based on a physical location associated with the selected machine learning node (e.g. FIG. 4 and [0033]: “In this example, the maximum value of distance d as observed by Sensing Device 402 may be determined to be the most accurate measurement of distance d between the individuals because this observed value of distance d may be collected from a point that may provide the most accurate value for the distance between the individuals.”) The citation discloses that Sensing Device 402 is determined/selected because it has the most accurate value of distance d due to its location.
Regarding claim 5, Greschler, in view of Shu, Crandall, and Hardie, discloses the method of claim 4, and Greschler further teaches wherein the physical location corresponds to a relative location of the selected machine learning node in relation to other nodes from among the identified set of machine learning nodes. (e.g. FIG. 4 and [0033]: “In this example, the maximum value of distance d as observed by Sensing Device 402 may be determined to be the most accurate measurement of distance d between the individuals because this observed value of distance d may be collected from a point that may provide the most accurate value for the distance between the individuals.”) The citation discloses that Sensing Device 402 is determined/selected because, compared with the other Sensing Devices 401 and 403-407, the value of distance d is most accurate when obtained from Sensing Device 402.
Regarding claim 8, the claim is a computer system claim having similar limitations to those cited in claim 1, and Crandall also teaches the additional limitation causing the one or more computer processors to replace a classification dataset of the first machine learning node in memory with the reference classification dataset (e.g. [0041]: “each device 402 can generate a local model and can send updates to the other devices as it gathers data. Using these updates, each device can update its local model as described above, and devices 402 can come to a consensus regarding the data being measured (e.g., atmospheric data, sediment data, wind data, water flow data, etc.)” and [0043]: “devices 403a, 403b, and 403c can train respective local models 406a, 406b, and 406c and can send updates to (and receive updates from) each other to obtain more accurate data using distributed learning.”) The citation discloses that the devices 402 can send updates of the data to, and receive updates from, each other. Since the received data is the updated data, it implies that the old data is replaced with the updated data.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to add the limitation causing the one or more computer processors to replace a classification dataset of the first machine learning node in memory with the reference classification dataset, as taught in Crandall’s invention, into Greschler’s invention, because the additional features would ensure that the nodes work with the most accurate and latest information, which helps the system reduce errors, use resources more efficiently, and produce more reliable classification results.
Regarding claims 11 and 12, the claims are computer system claims having similar limitations to those cited in claims 4 and 5, so they are also rejected under the same rationale.
Regarding claims 15, 18, and 19, the claims are non-transitory computer readable medium claims having similar limitations to those cited in claims 1, 4, and 5, so they are also rejected under the same rationale.
Claims 2, 9, and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Greschler, Shu, Crandall, and Hardie, and further in view of Vora et al., US Pat. No. US 11258671 B1 (hereafter Vora), and LIU et al., US Pub. No. US 20160026675 A1 (hereafter LIU).
Regarding claim 2, Greschler, in view of Shu, Crandall, and Hardie, discloses the method of claim 1, and Greschler further teaches wherein the selected machine learning node is selected based on ...... and current confidence value data for the selected machine learning node. (e.g. FIG. 4 and [0033]: “In this example, the maximum value of distance d as observed by Sensing Device 402 may be determined to be the most accurate measurement of distance d between the individuals because this observed value of distance d may be collected from a point that may provide the most accurate value for the distance between the individuals.”) The citation discloses the selection of Sensing Device 402 because it has the most accurate value/current confidence value.
Greschler, in view of Shu, Crandall, and Hardie, fails to teach historical confidence interval data for the selected machine learning node associated with the reference classification, or historical identification data associated with the selected machine learning node.
However, LIU teaches wherein the node is selected based on ......... historical confidence interval data for the selected machine learning node associated with the reference classification ([0010]: “In a first possible implementation manner of the first aspect, before the determining, by the coordinator, whether the service data is within a set data confidence interval, the method further includes: determining, by the coordinator, the data confidence interval according to historical data having a service type the same as a service type of the service data.”) The citation discloses a data confidence interval determined according to historical data/historical confidence interval data.
However, Vora teaches wherein the node is selected based on ...... historical identification data associated with the selected machine learning node (e.g. 164-Col 39, lines 61 – 67 and Col 40, lines 1 – 4: “Additionally, or alternatively, the process 900 may include storing functionality-management data indicating that, for the functionality, the first device has previously been selected as the primary device and/or that the second device has previously been selected as a secondary device. Selection, identification, and/or determination of a given device as a primary device may be performed based at least in part on the data and/or analyses described with respect to block 906, above. Data indicating such past selections, identifications, and/or determinations may be stored and utilized as a factor for selecting, identifying, and/or determining primary devices.” and 155-Col 37, lines 32 – 46: “At block 906, the process 900 may include determining the first device is to be a primary device used to perform the functionality. For example, a device-usage component may be configured to analyze historical usage data associated with the devices to determine which device is most favorable for performing a given function, such as wake-word detection, that is common among at least two of the devices. For example, the usage data may indicate one or more of prior usage patterns of the devices, energy consumption of the devices, wake-word detection false-positive rates, device placement within an environment, device modality, and/or user preference data. Some or all of this data may be analyzed by the device-usage component to determine which device to identify as the primary device for a given functionality.”) The citation discloses the selection of a device/node based on the wake-word detection false-positive rates/historical identification data.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to add the limitations historical confidence interval data for the selected machine learning node associated with the reference classification and historical identification data associated with the selected machine learning node, as taught in LIU’s and Vora’s inventions, into Greschler, Shu, Crandall, and Hardie’s invention, because the additional conditions for selecting a node would help improve the reliability of node selection and allow the system to ensure that tasks are consistently assigned to nodes with a strong performance record, accuracy, and stability over time.
Regarding claim 9, it is a system claim having similar limitations to those cited in claim 2, so it is also rejected under the same rationale.
Regarding claim 16, it is a non-transitory computer readable medium claim having similar limitations to those cited in claim 2, so it is also rejected under the same rationale.
Claims 3, 10, and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Greschler, Shu, Crandall, and Hardie, and further in view of Carter et al., US Pub. No. US 20200342049 A1 (hereafter Carter).
Regarding claim 3, Greschler, in view of Shu, Crandall, and Hardie, discloses the method of claim 1, but fails to teach wherein the node is selected through a load-balancing queue and manager based on a determination that a machine learning node is better at servicing a given operation than other machine learning nodes from among the identified set of machine learning nodes.
However, Carter teaches wherein the node is selected through a load-balancing queue and manager based on a determination that a machine learning node is better at servicing a given operation than other machine learning nodes from among the identified set of machine learning nodes. (e.g. [0051]: “In a particular aspect, a load balancer, in response to determining that the message queue includes a request for the web format service 186, sends a message to a server to instantiate the web format service 186. The load balancer may select a specific server to instantiate a particular service based on various criteria. For example, the criteria under which a server is selected to instantiate a service may include: 1) what server is next on a list of available servers, 2) that a load of a server is less than a first threshold, 3) that a load of a different server already executing an instance of the service is greater than a second threshold, 4) that no servers are currently executing the service, or a combination thereof. A server's load may be determined, for example, based on a number of queued requests for the server, a number of requests being processed by the server, etc.”) The citation discloses that the load balancer and a message queue/load-balancing queue select a server/node based on various criteria. Since the selected server meets the criteria, it implies that the selected server is better at servicing a given operation than the other servers.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to add the limitation wherein the node is selected through a load-balancing queue and manager based on a determination that a machine learning node is better at servicing a given operation than other machine learning nodes from among the identified set of machine learning nodes, as taught in Carter’s invention, into Greschler, Shu, Crandall, and Hardie’s invention, because this would ensure that tasks are directed to the most suitable nodes, which helps improve efficiency and fairness in resource allocation.
Regarding claim 10, it is a system claim having similar limitations to those cited in claim 3, so it is also rejected under the same rationale.
Regarding claim 17, it is a non-transitory computer readable medium claim having similar limitations to those cited in claim 3, so it is also rejected under the same rationale.
Claims 6, 13, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Greschler, Shu, Crandall, and Hardie, and further in view of Sharma et al., US Pub. No. US 20230161631 A1 (hereafter Sharma).
Regarding claim 6, Greschler, in view of Shu, Crandall, and Hardie, discloses the method of claim 1, but fails to teach wherein the selected machine learning node is selected based on a current use of processing resources associated with other machine learning nodes from among the identified set of machine learning nodes.
However, Sharma teaches wherein the selected machine learning node is selected based on a current use of processing resources associated with other machine learning nodes from among the identified set of machine learning nodes. (e.g. FIG. 2 and [0086]: “Custom scheduler 216 may schedule pods by selecting a worker node of worker nodes 204A-204N to receive the pod. For pods having a pod specification that indicates custom scheduler 216 is to be used for scheduling, custom scheduler 216 may select a worker node based on matching the pod characteristics to the profiles and current resource usage of the worker nodes.”) The citation discloses the selection of a worker node based on current resource usage of the worker node.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to add the limitation wherein the selected machine learning node is selected based on a current use of processing resources associated with other machine learning nodes from among the identified set of machine learning nodes, as taught in Sharma’s invention, into Greschler, Shu, Crandall, and Hardie’s invention, because the additional features would ensure that the system avoids overloading busy nodes, improve overall performance by balancing the workload across the nodes, and make more efficient use of computing resources.
Regarding claim 13, it is a system claim having similar limitations to those cited in claim 6, so it is also rejected under the same rationale.
Regarding claim 20, it is a non-transitory computer readable medium claim having similar limitations to those cited in claim 6, so it is also rejected under the same rationale.
Claims 7 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Greschler, Shu, Crandall, and Hardie, and further in view of Khandelwal et al., US Pub. No. US 20140189130 A1 (hereafter Khandelwal).
Regarding claim 7, Greschler, in view of Shu, Crandall, and Hardie, discloses the method of claim 1, but fails to teach further comprising assigning additional machine learning nodes from among the identified set of machine learning nodes to process the reference classification dataset.
However, Khandelwal teaches further comprising assigning additional machine learning nodes from among the identified set of machine learning nodes to process the reference classification dataset. (e.g. [0033]: “However, during observed high usage times, additional nodes may need to be assigned to interface and storage roles to provide greater infrastructure to support the increased demand.”) The citation discloses the concept of assigning additional nodes to support the increased demand.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the limitation further comprising assigning additional machine learning nodes from among the identified set of machine learning nodes to process the reference classification dataset, as taught by Khandelwal, into the combined invention of Greschler, Shu, Crandall, and Hardie because the additional nodes would improve the overall system reliability, efficiency, and speed in processing large and complex datasets, as multiple nodes can work together.
Regarding claim 14, it is a system claim having similar limitations cited in claim 7, so it is also rejected under the same rationale.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Examiner has cited particular columns/paragraphs/sections and line numbers in the references as applied to the claims above for the convenience of the applicant. Although the specified citations are representative of the teachings of the art and are applied to specific limitations within the individual claims, other passages and figures may apply as well. It is respectfully requested that the applicant, in preparing responses, fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passages as taught by the prior art or disclosed by the Examiner.
When responding to the Office action, applicant is advised to clearly point out the patentable novelty the claims present in view of the state of the art disclosed by the reference(s) cited or the objections made. A showing of how the amendments avoid such references or objections must also be present. See 37 C.F.R. 1.111(c).
When responding to this Office action, applicant is advised to provide the line and page numbers in the application and/or reference(s) cited to assist in locating the appropriate paragraphs.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to TUAN M NGUYEN whose telephone number is (703)756-1599. The examiner can normally be reached Monday-Friday: 9:30am - 5:30PM ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Pierre Vital can be reached on (571) 272-4215. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Tuan M Nguyen/
Examiner, Art Unit 2198
/PIERRE VITAL/Supervisory Patent Examiner, Art Unit 2198