DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
Claims 1, 3, 5-7, 9-10, 39-40, 42-45, and 48-53 are pending and are addressed in this Office Action.
Response to Arguments
Applicant’s arguments filed in the amendment of 10/27/2025 have been fully considered, but they are moot in view of the new grounds of rejection set forth below.
Priority
Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.
Drawings
The formal drawings received on 02/07/2023 have been entered.
Claim Objections
Claim 1 is objected to because of the following informalities: the limitation “… provide location based services wherein a group of services that ae able to be selected for each computing device of the plurality of computing devices is determined based at least in part on a hardware configuration of a respective computing device” appears to contain a typographical error; “ae” should read “are”. Appropriate correction is required.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1, 3, 7, 10, 39, 40, 42, and 45 is/are rejected under 35 U.S.C. 103 as being unpatentable over Mathews (US 20030060973) in view of Addepalli (US 20150264554), and further in view of Sullivan (US 20170024451) and Deutesfeld (US 20040261022).
1. Mathews teaches:
A distributed processing system for providing location based services, – in paragraph [0033] (Distributed navigation and automated route guidance using a plurality of networked computing devices, in which a computing device may host one or more navigation functional components.)
the distributed processing system comprising: a plurality of computing devices including at least one edge device and at least one cloud computing device arranged in a distributed processing system, – in paragraphs [0037]-[0045] (By defining the navigation system this way, and requiring that the navigation functional components communicate with each other using a distributed networking technology, the present invention provides the means to support various physical network and computing device configurations using the same software. The other navigation components are hosted by the vehicle server 330, which provides the processing and storage functions. A single computing device 302 hosting navigation system components 103, 105, 106, 107, 108 is another form of the LAN/WAN configuration, where the distributed networking technology used to implement the navigation function components provides for hosting on the same machine.)
each computing device comprising: – in paragraphs [0037]-[0045] (The other navigation components are hosted by the vehicle server 330, which provides the processing and storage functions. A single computing device 302 hosting navigation system components 103, 105, 106, 107, 108 is another form of the LAN/WAN configuration, where the distributed networking technology used to implement the navigation function components provides for hosting on the same machine.)
at least one processor; – in paragraphs [0037]-[0045] (The other navigation components are hosted by the vehicle server 330, which provides the processing and storage functions. A single computing device 302 hosting navigation system components 103, 105, 106, 107, 108 is another form of the LAN/WAN configuration, where the distributed networking technology used to implement the navigation function components provides for hosting on the same machine.)
at least one memory; – in paragraphs [0037]-[0045] (The other navigation components are hosted by the vehicle server 330, which provides the processing and storage functions. A single computing device 302 hosting navigation system components 103, 105, 106, 107, 108 is another form of the LAN/WAN configuration, where the distributed networking technology used to implement the navigation function components provides for hosting on the same machine.)
a core component, and – in paragraphs [0036]-[0045] (The navigation management component 106 provides the means to manage and encapsulate common navigation functions independent of the navigation guidance interface 105, including configuration and session management, event notification, and commonly used utilities.)
one or more services, – in paragraphs [0036]-[0045] (The guidance component 107 provides automated guidance features, wherein one or more navigation components receive guidance events regarding an NPO.)
wherein instances of the one or more services of one computing device are different than instances of the one or more services of a different computing device, and – in paragraphs [0036]-[0045] (The other navigation components (i.e., Guidance Component 107) are hosted by the vehicle server 330, which provides the processing and storage functions. A single computing device 302 hosting Guidance Component 107 is another form of the LAN/WAN configuration.)
wherein one or more of the one or more services are configured to provide location based services, and – in paragraphs [0036]-[0045] (The guidance component 107 generates guidance status information in response to changes in the information provide by the physical location sensor 103.)
Mathews does not explicitly teach:
at least one edge device and at least one cloud computing device; wherein at least one of the one or more services comprises a data store service configured to maintain a cache of partitions of mapped data local to one or more of the at least one edge device, wherein the core component of each respective computing device provides cache management and entity state management for the respective computing device.
However, Addepalli teaches:
at least one edge device and – in paragraphs [0058], [0059] (Communication system 10 can also provide intelligent caching of navigational map tiles from an Internet-based cloud and/or from peers (e.g., other OBUs) to enable a seamless user experience in which the user feels he/she has access to an entire map. The term `roadside infrastructure device` as used herein includes a base station, access point, satellite, and any other wireless device capable of establishing a network connection for exchanging packets between a user device, mobile device, or OBU and other networks such as the Internet. References to `infrastructure` are intended to include any roadside infrastructure device including local infrastructure devices, or any other device, component, element, or object capable of facilitating communication to nodes on other networks such as the Internet.)
at least one cloud computing device – in paragraphs [0058], [0242] (Communication system 10 can also provide intelligent caching of navigational map tiles from an Internet-based cloud and/or from peers (e.g., other OBUs) to enable a seamless user experience in which the user feels he/she has access to an entire map. Pre-rendered map tiles are typically pre-rendered from the vector data by servers in the cloud and chopped up into equally sized square images.)
wherein at least one of the one or more services comprises a data store service configured to maintain a cache of partitions of mapped data local to one or more of the at least one edge device, – in paragraphs [0052], [0059], [0187], [0242]-[0246] (OBU 30 may also include capabilities associated with navigation system 17 (e.g., a global positioning system (GPS), location-based service). Communication system 10 in which on-board units (OBUs) are configured to access map tiles from the Internet, from roadside infrastructure devices, and/or from other OBUs of other vehicles, and intelligently cache the map tiles to minimize network traffic when dynamically updating a location-based display. OBU 30 may act as a host/proxy of data/content, so that other OBUs can in some instances request and receive desired content locally within a vehicular ad hoc network rather than accessing the Internet. OBUs 30, 30a, and 30b in vehicular ad hoc network 530 may cache map tiles, for example, of a current geographic area, after receiving such map tiles from other OBUs, from a map provider in the Internet, or from a roadside infrastructure device.)
wherein the core component of each respective computing device provides cache management and entity state management for the respective computing device. – in paragraphs [0209], [0282], [0288], [0292] (Roadside infrastructure access module 741 can update location cache 734 by sending requests to RIID 722 based on current location and route information of vehicle 4, which could be obtained from navigation system 17. A current location of vehicle 4 can be provided by the vehicle's GPS, and a location (and other data) associated with nearby roadside infrastructure devices can be provided by RIID database 722.)
It would have been obvious for one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Mathews with Addepalli to include at least one edge device and at least one cloud computing device; wherein at least one of the one or more services comprises a data store service configured to maintain a cache of partitions of mapped data local to one or more of the at least one edge device, wherein the core component of each respective computing device provides cache management and entity state management for the respective computing device, as taught by Addepalli, in paragraph [0003], to provide the ability to conduct transactions, including data exchanges, in vehicular network environments in a disruption tolerant manner, providing mobility support to devices inside a vehicle, enabling collaborative use of data from sensing devices in a vehicle, and optimizing retrieval and display of navigational maps in a vehicle.
The combination of Mathews and Addepalli does not explicitly teach:
wherein the core component of the respective computing device is configured to communicate with the one or more services of the respective computing device as well as with the core component of at least one other computing devices in order to share data and synchronize the core component.
However, Sullivan teaches:
wherein the core component of the respective computing device is configured to communicate with the one or more services of the respective computing device as well as with the core component of at least one other computing devices in order to share data and synchronize the core component, – in paragraphs [0032]-[0049], [0111]-[0114] (Each data-center database can synchronize to each other data-center database in a fully connected mesh. Within a datanet, the data can be replicated from an agent (e.g. an agent can be located on mobile-devices as well as IoT devices) to datanet datacenters and/or the subscribers in a minimal amount of network hops and/or in parallel. The datanet can support sharing sections of documents between different documents.)
It would have been obvious for one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Mathews and Addepalli with Sullivan to include wherein the core component of the respective computing device is configured to communicate with the one or more services of the respective computing device as well as with the core component of at least one other computing devices in order to share data and synchronize the core component, as taught by Sullivan, in paragraph [0004], to provide a computerized technique for implementing Conflict-free Replicated Data Type (CRDT) arrays for improving data storage systems.
The combination of Mathews, Addepalli, and Sullivan does not explicitly teach:
wherein the one or more services are selected from a group consisting of a stateful pipeline and a stateless microservice, wherein a group of services that are able to be selected for each computing device of the plurality of computing devices is determined based at least in part on a hardware configuration of a respective computing device.
However, Deutesfeld teaches:
wherein the one or more services are selected from a group consisting of a stateful pipeline and a stateless microservice, – in paragraphs [0004]-[0053] (FIG. 1 depicts a process for dynamically selecting stateful or stateless software components. It can decide to call a stateless version of the component, since the overhead associated with maintaining a stateful component is not justified when such a minimal amount of calls are made.)
wherein a group of services that are able to be selected for each computing device of the plurality of computing devices is determined based at least in part on a hardware configuration of a respective computing device, – in paragraphs [0004]-[0053] (Stateless components have less system resource overhead since they are not in existence for long and do not store state information. On the other hand, stateful components have more system resource overhead since they retain state information.)
It would have been obvious for one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Mathews, Addepalli, and Sullivan with Deutesfeld to include wherein the one or more services are selected from a group consisting of a stateful pipeline and a stateless microservice, wherein a group of services that are able to be selected for each computing device of the plurality of computing devices is determined based at least in part on a hardware configuration of a respective computing device, as taught by Deutesfeld, in paragraphs [0001]-[0008], to dynamically select at runtime the type of component to invoke in order to implement a particular task.
3. The distributed processing system of Claim 1 – refer to the indicated claim for reference(s).
Deutesfeld teaches:
wherein the stateful pipeline comprises a plurality of computations configured to be performed in a sequential manner to generate a value maintained as a state of the stateful pipeline, and wherein the core component of a respective computing device that includes the stateful pipeline is configured to maintain the state of the stateful pipeline. – in paragraphs [0004]-[0053] (Stateful components have more system resource overhead since they retain state information, but the accompanying communication traffic may be reduced since the calling application does not have to keep re-sending duplicate information (e.g., parameter data) to the component every time it is called to perform a task.)
7. The distributed processing system of Claim 1 – refer to the indicated claim for reference(s).
Addepalli teaches:
wherein the core component of each computing device is configured to provide cache management of data for the one or more services. – in paragraphs [0209], [0282], [0288], [0292] (Roadside infrastructure access module 741 can update location cache 734 by sending requests to RIID 722 based on current location and route information of vehicle 4, which could be obtained from navigation system 17. A current location of vehicle 4 can be provided by the vehicle's GPS, and a location (and other data) associated with nearby roadside infrastructure devices can be provided by RIID database 722.)
10. The distributed processing system of Claim 1 – refer to the indicated claim for reference(s).
Sullivan teaches:
wherein core components of the plurality of computing devices are configured to share data having a conflict-free replicated data type (CRDT). – in paragraphs [0032]-[0049], [0111]-[0114] (Conflict-free replicated data type (CRDT) (can also be termed a ‘conflict-free replicated type’ or a ‘commutative replicated data type’) can be a data type whose operations commute when they are concurrent. CRDTs can be used to achieve strong eventual consistency and/or monotonicity (e.g. an absence of rollbacks). CRDTs can be used to replicate data across multiple computers of a network, executing updates without the need for remote synchronization.)
39. The distributed processing system of Claim 1, – refer to the indicated claim for reference(s).
Addepalli teaches:
wherein a state service of the core component of one or more of the at least one edge device upload updated properties according to policies for each attribute to the state service of the core component of one or more of the at least one cloud computing device. – in paragraphs [0209], [0282], [0288], [0292] (Roadside infrastructure access module 741 can update location cache 734 by sending requests to RIID 722 based on current location and route information of vehicle 4, which could be obtained from navigation system 17. A current location of vehicle 4 can be provided by the vehicle's GPS, and a location (and other data) associated with nearby roadside infrastructure devices can be provided by RIID database 722.)
40. Claim 40 is substantially similar to claim 1 except for the following; refer to the indicated claim for reference(s).
Sullivan teaches:
wherein the plurality of computing devices share data having a conflict-free replicated data type whereby conflicts within shared data are resolved for consistency. – in paragraphs [0032]-[0049], [0111]-[0114] (Conflict-free replicated data type (CRDT) (can also be termed a ‘conflict-free replicated type’ or a ‘commutative replicated data type’) can be a data type whose operations commute when they are concurrent. CRDTs can be used to achieve strong eventual consistency and/or monotonicity (e.g. an absence of rollbacks). CRDTs can be used to replicate data across multiple computers of a network, executing updates without the need for remote synchronization.)
42. Claim 42 is substantially similar to claim 3.
45. Claim 45 is substantially similar to claim 7.
Claim(s) 5, 6, 43, and 44 is/are rejected under 35 U.S.C. 103 as being unpatentable over Mathews (US 20030060973) in view of Addepalli (US 20150264554), and further in view of Sullivan (US 20170024451), Deutesfeld (US 20040261022), and Huang (US 20100153010).
5. The distributed processing system of Claim 1 – refer to the indicated claim for reference(s).
The combination of Mathews, Addepalli, Sullivan, and Deutesfeld does not explicitly teach:
wherein the one or more services comprise a routing microservice and a guidance microservice arranged as an application, and wherein the core component of a respective computing device is configured to provide an output of the routing microservice to the guidance microservice.
However, Huang teaches:
wherein the one or more services comprise a routing microservice and a guidance microservice arranged as an application, and wherein the core component of a respective computing device is configured to provide an output of the routing microservice to the guidance microservice. – in paragraphs [0028]-[0053] (A guidance module 230 can receive the route 226 from the routing module 220. The term "module" referred to herein can include software, hardware, or a combination thereof. For example, the software can be machine code, firmware, embedded code, and application software.)
It would have been obvious for one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Mathews, Addepalli, Sullivan, and Deutesfeld with Huang to include wherein the one or more services comprise a routing microservice and a guidance microservice arranged as an application, and wherein the core component of a respective computing device is configured to provide an output of the routing microservice to the guidance microservice, as taught by Huang, in paragraphs [0001]-[0009], to fulfill a need that remains for a navigation system to efficiently utilize available information, and to facilitate rapid modifications to the information.
6. The distributed processing system of Claim 1 – refer to the indicated claim for reference(s).
The combination of Mathews, Addepalli, Sullivan, and Deutesfeld does not explicitly teach:
wherein the core component of a respective computing device is configured to communicate with the core component of another computing device in order to share data and coordinate execution of the stateful pipeline.
However, Huang teaches:
wherein the core component of a respective computing device is configured to communicate with the core component of another computing device in order to share data and coordinate execution of the stateful pipeline. – in paragraphs [0028]-[0053] (A guidance module 230 can receive the route 226 from the routing module 220. If the edge data 304 and the turn identification 306 of the query data 212 cannot generate guidance for a turn, the aggregate edges module 630 can generate the turn types 634 from the map data 210. Such a sequence represents processing the intersection in real-time.)
It would have been obvious for one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Mathews, Addepalli, Sullivan, and Deutesfeld with Huang to include wherein the core component of a respective computing device is configured to communicate with the core component of another computing device in order to share data and coordinate execution of the stateful pipeline, as taught by Huang, in paragraphs [0001]-[0009], to fulfill a need that remains for a navigation system to efficiently utilize available information, and to facilitate rapid modifications to the information.
43. Claim 43 is substantially similar to claims 3, 5.
44. Claim 44 is substantially similar to claims 3, 6.
Claim(s) 9 is/are rejected under 35 U.S.C. 103 as being unpatentable over Mathews (US 20030060973) in view of Addepalli (US 20150264554), and further in view of Sullivan (US 20170024451), Deutesfeld (US 20040261022), and Thakadu (US 20140237594).
9. The distributed processing system of Claim 1 – refer to the indicated claim for reference(s).
The combination of Mathews, Addepalli, Sullivan, and Deutesfeld does not explicitly teach:
wherein the core component of a respective computing device is configured to be responsive to one or more function calls from another core component, wherein function call received by the core component of the respective computing device is associated with a user token, and wherein the core component of the respective computing device is further configured to perform one or more operations on data in response to the function call within a secure area assigned exclusively to a user associated with the user token.
However, Thakadu teaches:
wherein the core component of a respective computing device is configured to be responsive to one or more function calls from another core component, wherein function call received by the core component of the respective computing device is associated with a user token, and wherein the core component of the respective computing device is further configured to perform one or more operations on data in response to the function call within a secure area assigned exclusively to a user associated with the user token. – in paragraphs [0004], [0016]-[0017], [0042]-[0044] (An API sandbox (see 201) is provided that may receive API calls from user devices, make a local and non-invasive copy of both the called API names and API parameters, and pass the API call to the original web service provider, e.g., provided that the intrusion detection system and/or security administrators authorize the passing of the API call to the original web service provider. In some embodiments, the API sandbox may be co-located at an enterprise software gateway. The API sandbox may be configured to receive all API calls from all applications, or only some API calls having specific API call names, or API calls only from a subset of applications (e.g., API calls originating from applications developed by specific application developers; applications of a specific type (e.g., games, business software, etc.), applications of a specific usage level, etc.), or other like subsets of API calls.)
It would have been obvious for one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Mathews, Addepalli, Sullivan, and Deutesfeld with Thakadu to include wherein the core component of a respective computing device is configured to be responsive to one or more function calls from another core component, wherein function call received by the core component of the respective computing device is associated with a user token, and wherein the core component of the respective computing device is further configured to perform one or more operations on data in response to the function call within a secure area assigned exclusively to a user associated with the user token, as taught by Thakadu, in paragraphs [0002]-[0004], to provide a technique for API-level intrusion detection.
Claim(s) 48, 51 is/are rejected under 35 U.S.C. 103 as being unpatentable over Mathews (US 20030060973) in view of Addepalli (US 20150264554), and further in view of Sullivan (US 20170024451), Deutesfeld (US 20040261022), and Bijor (US 20190037357).
48. The distributed processing system of Claim 1, – refer to the indicated claim for reference(s).
The combination of Mathews, Addepalli, Sullivan, and Deutesfeld does not explicitly teach:
wherein once a service is selected, an indication is provided to a user of whether the service is to be performed by the at least one edge device or the at least one cloud computing device.
However, Bijor teaches:
wherein once a service is selected, an indication is provided to a user of whether the service is to be performed by the at least one edge device or the at least one cloud computing device. – in paragraphs [0009]-[0074] (The computer system can receive service provider locations 684 of service providers operating throughout the given region, and the processor can execute the selection instructions 622 to select an optimal service provider from a set of available service providers, and transmit a service invitation 652 to enable the service provider to accept or decline the ride service offer.)
It would have been obvious for one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Mathews, Addepalli, Sullivan, and Deutesfeld with Bijor to include wherein once a service is selected, an indication is provided to a user of whether the service is to be performed by the at least one edge device or the at least one cloud computing device, as taught by Mathews, in paragraphs [0002]-[0034], to provide navigation and automated guidance to a mobile user.
51. Claim 51 is substantially similar to claim 48.
Claim(s) 49, 50, 52, 53 is/are rejected under 35 U.S.C. 103 as being unpatentable over Mathews (US 20030060973) in view of Addepalli (US 20150264554), and further in view of Sullivan (US 20170024451), Deutesfeld (US 20040261022), Bijor (US 20190037357), and Acar (US 20210297504).
49. The distributed processing system of Claim 48, – refer to the indicated claim for reference(s).
The combination of Mathews, Addepalli, Sullivan, Deutesfeld, and Bijor does not explicitly teach:
wherein an indication is provided to a user associated with at least one of hardware usage or data consumption by a selected one of the at least one edge device or the at least one cloud computing device.
However, Acar teaches:
wherein an indication is provided to a user associated with at least one of hardware usage or data consumption by a selected one of the at least one edge device or the at least one cloud computing device. – in paragraphs [0003]-[0066] (Cloud computing platforms provide users and enterprise customers with a variety of compute services. For example, an Infrastructure-as-a-Service (IaaS) platform may provision virtual server instances and deploy applications on those instances. Further, users can create, deploy, and terminate instances as needed, e.g., in response to ongoing demand by individuals accessing the application. Further still, a cloud computing platform may provide auto-scaling services—automating the creation of additional instances (and similarly, termination of those instances) by monitoring resource utilization metrics of the virtual servers of the user and provisioning instances if the metrics trigger specified conditions. For example, the cloud computing platform may detect that CPU utilization of a virtual server cluster exceeds a threshold specified by the user. In response, the cloud computing platform may provision additional instances to the virtual server cluster according to a policy set by the user. As a result, the virtual server cluster can efficiently adapt to changes in network traffic and resource load with minimal effort on the part of the user.)
It would have been obvious for one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Mathews, Addepalli, Sullivan, Deutesfeld, and Bijor with Acar to include wherein an indication is provided to a user associated with at least one of hardware usage or data consumption by a selected one of the at least one edge device or the at least one cloud computing device, as taught by Acar, in paragraphs [0002]-[0021], to configure an application stack executing in the cloud computing environment in response to changes in resource demand.
50. The distributed processing system of Claim 49, – refer to the indicated claim for reference(s).
Acar teaches:
wherein an indication is provided to the user including an option to alter the selected one of the at least one edge device or the at least one cloud computing device based on the at least one of hardware usage or data consumption. – in paragraphs [0003]-[0066] (Cloud computing platforms provide users and enterprise customers with a variety of compute services. For example, an Infrastructure-as-a-Service (IaaS) platform may provision virtual server instances and deploy applications on those instances. Further, users can create, deploy, and terminate instances as needed, e.g., in response to ongoing demand by individuals accessing the application. Further still, a cloud computing platform may provide auto-scaling services—automating the creation of additional instances (and similarly, termination of those instances) by monitoring resource utilization metrics of the virtual servers of the user and provisioning instances if the metrics trigger specified conditions. For example, the cloud computing platform may detect that CPU utilization of a virtual server cluster exceeds a threshold specified by the user. In response, the cloud computing platform may provision additional instances to the virtual server cluster according to a policy set by the user. As a result, the virtual server cluster can efficiently adapt to changes in network traffic and resource load with minimal effort on the part of the user.)
52. Claim 52 is substantially similar to claim 49.
53. Claim 53 is substantially similar to claim 50.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MUHAMMAD RAZA whose telephone number is (571)272-7734. The examiner can normally be reached Monday-Friday, 7:00 A.M.-5:00 P.M.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Vivek Srivastava can be reached on (571)272-7304. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MUHAMMAD RAZA/Primary Examiner, Art Unit 2449