Prosecution Insights
Last updated: April 19, 2026
Application No. 18/384,399

METHOD AND SYSTEM FOR PROVIDING RECOMMENDATIONS FOR INDOOR NAVIGATION

Non-Final OA — §101, §103, §112
Filed: Oct 27, 2023
Examiner: PALMARCHUK, BRIAN KEITH
Art Unit: 3669
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: HCL Technologies Limited
OA Round: 3 (Non-Final)
Grant Probability: 80% (Favorable)
OA Rounds: 3-4
To Grant: 2y 4m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 80% (8 granted / 10 resolved) — above average, +28.0% vs TC avg
Interview Lift: a strong +28.6% across resolved cases with an interview
Avg Prosecution: 2y 4m typical timeline (32 applications currently pending)
Total Applications: 42 across all art units
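
For readers who want to see how an interview lift like the +28.6% above could be computed, here is a minimal sketch. The 3-of-3 versus 5-of-7 split below is an assumed decomposition that happens to be consistent with the page's 8-granted / 10-resolved totals; it is not confirmed docket data.

```python
# Interview "lift" as the difference in allowance rate between resolved
# cases with and without an examiner interview. The case splits below are
# assumptions consistent with the 8/10 totals shown, not real docket data.

def allowance_rate(granted: int, resolved: int) -> float:
    """Share of resolved cases that ended in a grant."""
    return granted / resolved if resolved else 0.0

rate_with = allowance_rate(granted=3, resolved=3)     # 100.0% (assumed split)
rate_without = allowance_rate(granted=5, resolved=7)  # ~71.4% (assumed split)

print(f"with interview:    {rate_with:.1%}")
print(f"without interview: {rate_without:.1%}")
print(f"interview lift:    {rate_with - rate_without:+.1%}")  # +28.6%
```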

Statute-Specific Performance

§101: 15.6% (-24.4% vs TC avg)
§103: 47.2% (+7.2% vs TC avg)
§102: 18.4% (-21.6% vs TC avg)
§112: 18.9% (-21.1% vs TC avg)

Tech Center averages are estimates. Based on career data from 10 resolved cases.
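
As a sanity check on the table, subtracting each quoted delta from the examiner's rate recovers the implied Tech Center average; a short sketch, using only the numbers shown above:

```python
# Recover the implied Tech Center average from each row of the table above:
# TC average = examiner rate - quoted delta (all values in percent).
rows = {
    "§101": (15.6, -24.4),
    "§103": (47.2, +7.2),
    "§102": (18.4, -21.6),
    "§112": (18.9, -21.1),
}
for statute, (rate, delta) in rows.items():
    tc_avg = rate - delta
    print(f"{statute}: examiner {rate:.1f}% vs TC avg {tc_avg:.1f}%")
# Every row implies the same TC average estimate of 40.0%.
```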

Office Action

§101 §103 §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

This Office Action is in response to the Applicants' filing on October 27, 2025. Claims 1-20 were previously pending, of which claims 1, 9 and 17 have been amended, and claims 4, 12 and 20 have been cancelled. Accordingly, claims 1-3, 5-11 and 13-19 are currently pending and are examined below.

Response to Arguments

Applicant's "Amendment and Remarks" (pages 8-34, filed March 12, 2026) have been fully considered and are addressed in the order presented.

With respect to the 35 U.S.C. § 101 rejection, Applicant's arguments have been fully considered but are not persuasive; the rejection is therefore maintained.

With respect to the 35 U.S.C. § 103 rejections, Applicant's arguments have been fully considered and are persuasive. The feature upon which Applicant relies (i.e., assigning location coordinate tags of static objects to corresponding cameras for each location of a plurality of locations within the closed premises) is not clearly taught by the combination of Vaughn and Desai, so that rejection is withdrawn. An updated search and consideration was performed; Jadhav et al., US 2021/0231440 A1, from the Applicant's IDS filed 10/27/2023, was found to teach tagging the locations of static objects (landmarks) as part of user route guidance from camera images/tracking. An updated rejection is therefore provided below. Because the static-tagging limitation was present in (now cancelled) claims 4, 12 and 20 of the originally examined claim set and has been directly integrated into the independent claims, the updated grounds of rejection were not necessitated solely by the amendments; accordingly, the new grounds of rejection in view of the prior art below are made non-final to give the Applicant a proper opportunity to respond.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

Claims 1, 9 and 17 are rejected under 35 U.S.C. 112(b) as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention. The limitation "assigning location coordinate tags of static objects to corresponding cameras for each location of a plurality of locations within the closed premises" is unclear as to how the corresponding cameras relate to the at least one camera presented earlier in the claim, or whether they are different cameras.

Claims 2-3, 5-8, 10-11, 13-16 and 18-19 are rejected under 35 U.S.C. 112(b) as being dependent on rejected claims 1, 9 and 17 and for failing to cure the deficiencies listed above.

Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-3, 5-11 and 13-19 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

The Examiner has identified method, system, and apparatus claims 1, 9, and 17 as the claims that represent the claimed invention for analysis. Claims 1, 9, and 17 recite the limitations of (capturing, comparing, and identifying):

A method of identifying a current location of a user in a closed premises, the method comprising: capturing, by a server via at least one camera communicatively coupled to the server, user information associated with the user and object information associated with one or more static objects in proximity to the user, when the user and the one or more static objects are within a Field of View (FOV) of the at least one camera, wherein the user information comprises primary user data and secondary user data, and wherein the primary user data is stored in a form of hash value; assigning, location coordinate tags of static objects to corresponding cameras for each location of a plurality of locations within the closed premises; comparing, by the server, the user information with historical user information corresponding a plurality of users visited the closed premises, and the object information with historical object information corresponding to a plurality of static objects within the closed premises; and identifying, by the server, the current location of the user within the closed premises based on the comparing.

This is a process that, under its broadest reasonable interpretation, covers performance of the limitation(s) as a mental process (a concept performed in the human mind) but for the recitation of generic computer elements. For example, a person could mentally obtain user information, compare the user information with historical user information, and identify the current location of the user.

With respect to Step 2A, Prong II, this judicial exception is not integrated into a practical application. The claim recites the additional elements of "server" and "camera" multiple times. These elements are recited at a high level of generality such that they amount to no more than mere instructions to apply the exception using generic computer components. Accordingly, these elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.

With respect to Step 2B, the aforementioned additional elements are generic computer elements that have been held under Alice to be not significantly more than the abstract idea. The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above, using the processors to receive information, make decisions, and supply instructions amounts to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using generic computer components cannot provide an inventive concept.

Claims 2-3, 5-8, 10-11, 13-16 and 18-19 recite further abstract language and characteristics that define the abstract ideas presented in the independent claims.
These limitations do not integrate the abstract idea into a practical application. Therefore, the claims are also rejected under 35 U.S.C. § 101.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3, 5-11 and 13-19 are rejected under 35 U.S.C. 103 as being unpatentable over Vaughn et al., US 2017/0372223 A1 (hereinafter "Vaughn"), in view of Jadhav et al., US 2021/0231440 A1 (hereinafter "Jadhav") (IDS), and in further view of Desai et al., US 10,147,210 B1 (hereinafter "Desai").

Regarding Claims 1, 9, and 17:

Vaughn recites a system for identifying a current location of a user in a closed premises, the system (100) comprising: a processor (102); a memory (104) communicatively coupled to the processor; wherein the memory stores processor-executable instructions. See Fig. 1.

wherein the user information comprises primary user data and secondary user data, and wherein the primary user data is stored in a form of hash value: In [0060], "Capturing/sensing component(s) 231 and/or I/O component(s) 263 may further include voice recognition devices, photo recognition devices, facial and other body recognition components, voice-to-text conversion components, etc.". Also, in [0038], "privacy engine 209 capable of keeping anonymous the 'token' that identifies a user, such as user A, from their computing device, such as computing device 250A (e.g., smartphone, smart watch, radio frequency identification (RFID) tag, etc.)". Examiner note: "hash value" equivalent information is discussed in [0040] as a unique identifier to anonymize the user info.

compare the user information with historical user information corresponding to a plurality of users that visited the closed premises: See [0048], "Once any behavioral data, application data, and/or location data relating to user A has been gathered from computing device 250A, this data may then be processed from a crowd-sourced view".

and the object information with historical object information corresponding to a plurality of static objects within the closed premises: In [0048], "learning engine 211 may consider various scenarios as to the identified location based on the available data, such as behavioral data, to determine whether the location is, for example, a common living area, a kitchen, a bathroom, a collaboration room, a conference or meeting room, etc.".
Also, in [0054], "plotting of a map, as facilitated by map building logic 213, may be performed automatically and dynamically based on the changing local environment as detected by data access logic 253".

Vaughn discloses the use of imaging to capture user information and location identification, but does not explicitly disclose coordinate tagging of static objects. However, Jadhav teaches:

assign location coordinate tags of static objects to corresponding cameras for each location of a plurality of locations within the closed premises, as part of a user route suggestion and guidance system for interior spaces: See [0005-0006], "by the one or more hardware processors, a destination within the facility from the user; estimate, using a surrounding recognition machine learning model implemented by the one or more hardware processors, a current spatial location of the user with a predefined precision range at centimeter (cm) level by identifying (i) a user specific area in the facility using the surrounding recognition machine learning model trained with a plurality of real world images of the facility and (ii) the current spatial location of the user with respect to the identified user specific area by triangulating input data received from a plurality of sensors; determine, by the one or more hardware processors, an optimal path from the current location to the destination using the nested environment data, wherein the optimal path from the current location to the destination is categorized as at least one of (i) a convenient path (ii) multi-destination path, (iii) shortest path in accordance with one or more user constraints", which teaches using a map with landmarks for an enclosed area/building and tracking a user. Also see [0031], "Also, each landmark is tagged to a unique reference surroundings for identification based on one or more parameters such as images of the landmark, text or signages present, direction of the landmark, additional images like QR codes or pictures, Wi-Fi signal strength and magnetic field intensity of the area. These one or more parameters distinguish one landmark from other landmarks. Furthermore, direction of indoor environment such as geospatial orientation and GPS location are captured to tag the nested environment data to respective", which teaches tagging of landmarks (static objects).

As both are in the same field of endeavor, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine Vaughn's system with the coordinate tagging of objects disclosed in Jadhav, with a reasonable expectation of success. The motivation for doing so would have been to provide user navigation with last-meter precision and no hardware or internet dependency; see Jadhav [Abstract].

Vaughn discloses the use of imaging to capture user information and location identification, but does not explicitly disclose object information.
However, Desai teaches:

capture, via at least one camera, user information associated with the user and object information associated with one or more static objects in proximity to the user, when the user and the one or more static objects are within a Field of View (FOV) of the at least one camera: In [col.2, ln.34-44], "For example, the inventory management system may maintain information indicative of location of a user in the facility, a quantity of items stowed or held at particular inventory locations, what items a particular user is handling, environmental status of the facility, and so forth. The inventory management system may use various techniques to process and analyze the sensor data. For example, a machine vision system may process the image data to identify objects, track objects, and so forth, at the facility or in other settings".

identify the current location of the user within the closed premises based on the comparing: In [col.17 ln.20 - col.19 ln.40], "Continuing the example, the data processing parameters 338 may specify that the location of the user 116 in the facility 102 is to be determined at particular intervals, such as every 10 seconds".

As both are in the same field of endeavor, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine Vaughn's system with the identifying of objects and user attributes disclosed in Desai, with a reasonable expectation of success. The motivation for doing so would have been to monitor the precise location or movement of inventory, users, and other objects within the facility; see [col.1, ln.25-26].

Regarding Claims 2, 10, and 18:

Vaughn recites the following limitation dependent on claims 1, 9, and 17: wherein the processor-executable instructions further cause the processor to receive a user input related to a destination location from the user, upon a successful user authentication. See [0027], "each of computing devices 250A-N may host a participating software application ('participation application'), such as participation application 251 at computing device 250A, for participating in making and navigating of maps in communication with mapping mechanism 110 at computing device 100 over one or more communication medium(s) 230, such a cloud network, the Internet, etc.". Also, in [0028], "Referring back to mapping mechanism 110, reception/verification logic 201 may be used to receive any information or data, such as request for participation, from one or more of computing devices 250A-N, where reception/verification logic 201 may be further to verify or authenticate computing devices 250A-N before and/or during their participation in various mapping tasks, as described throughout this document".

Regarding Claims 3, 11, and 19:

Vaughn recites the following limitation dependent on claims 2, 10, and 18: wherein the processor-executable instructions further cause the processor to: determine an optimal route from a plurality of routes based on the current location of the user, the destination location, and the user information: See [0034], "… data analytic engine 207 may be further used to analyze characteristics relating to the route taken by the user, such as whether the route is the shortest route, fastest route, recommended route based on any number of factors, such as (without limitations) company policies, local environment".
provide recommendations to the user for navigating inside the closed premises to reach the destination location based on the optimal route: See [0056], "recommendation engine 215 may be triggered and provided through navigation/communication logic 57 to offer instant directions, recommendations, etc., to end-users through their respective computing devices 250A-N. For example, recommendation engine 215 may be capable of accessing any number and type of maps created by map building logic 213 and stored at one or more database(s) 225 to serve the users with any amount and type of information about various locations, spaces, landmarks, etc., within a facility as requested by the user".

Regarding Claims 5 and 13:

Vaughn recites the following limitation dependent on claims 3 and 11: further comprising: dynamically capturing motion of the user while the user starts navigating on the optimal route, wherein the motion of the user during the navigation is captured via each of a set of cameras along the optimal route: See Vaughn [0034], "… data analytic engine 207 may be further used to analyze characteristics relating to the route taken by the user, such as whether the route is the shortest route, fastest route, recommended route based on any number of factors, such as (without limitations) company policies, local environment".

validating whether the motion of the user is on the optimal route: In [0069], "Further, computing devices 250A and 250B may be monitored by any number and type of sensors 305A-D at their corresponding location points to further determine paths or routes 309A, 309B taken by users A and B (and their computing devices 250A and 250B), respectively".

Regarding Claims 6 and 14:

Vaughn recites the following limitation dependent on claims 5 and 13: wherein the processor-executable instructions further cause the processor to: determine a new optimal route when the validation is unsuccessful: In [0077], "As further discussed with reference to FIG. 2, this information from block 413 and/or any refined information relating to mapping, locations, paths, etc., received from server side 403 may then be viewed and/or navigated by the user of the client computer using a user interface, such as user interface 261 of FIG. 2, where method 400 on client side 401 ends at block 416".

render new recommendations based on the new optimal route to the user for navigation to the destination location inside the closed premises: In [0079], "At block 431, relevant mapping data and/or recommendations, as facilitated by map building logic 213 and recommendation engine 215 of FIG. 2, are then communicated over to client side 401 using one or more communication medium(s) 230, such as one or more networks, and offered to the user at block 415 using a user interface of the client computer, such as user interface 261 of computing device 250A of FIG. 2, where method 400 on server side 403 ends at block 432".

Regarding Claims 7 and 15:

Vaughn recites the following limitation dependent on claims 1 and 9: wherein the processor-executable instructions further cause the processor to: determine the hash value corresponding to the primary user data: See [0074], "In the illustrated embodiment, table 370 is shown as including any amount and type of data that may be used performing of tasks relating to map building, recommending directions and/or routes, and displaying maps, routes, etc., as described throughout this document.
For example, table 370 is shown as identifying users 371, tracking times 373, location coordinates 375, location names 377, third-party application data 379, arm motion interpretations 381, and/or the like". Examiner note: "hash value" equivalent information is discussed in [0040] as a unique identifier to anonymize the user info.

compare the hash value with a plurality of historical hash values corresponding to primary user data of the plurality of users visited the closed premises: In [0041], "learning engine 211 may be triggered to provide the necessary intelligence to take into consideration other forms of data collection, such as arm motions of the users may be accessed from database(s) 225 or observed using one or more cameras and/or one or more sensors, etc., of I/O components 263 of computing device 250A, which may then be used to identify what a location is being used for and, while implementing a privacy rule and boundary if the location (e.g., bathroom) and/or the act (e.g., using bathroom) of the user is identified as private". Also, in [0079], "As further discussed with reference to FIG. 2, the collected information is then analyzed, filtered, interpreted, etc., on server side 403, such as analyzed at block 425 by data analytic engine 207 and filtered for privacy and boundaries at block 427 by privacy engine 209, as further illustrated with reference to FIG. 4B. At block 429, the analyzed and filtered data is further interpreted based on any additional relevant information (e.g., arm movement, user patterns, time of data, logical conclusions, etc.) as facilitated by learning engine 211 of FIG. 2".

at least one of: identify the user as an existing user when the hash value is equivalent to at least one hash value of the plurality of historical hash values; or identify the user as a new user when the hash value is different from each of the plurality of historical hash values: See [0074], "In the illustrated embodiment, table 370 is shown as including any amount and type of data that may be used performing of tasks relating to map building, recommending directions and/or routes, and displaying maps, routes, etc., as described throughout this document. For example, table 370 is shown as identifying users 371". Examiner note: "hash value" equivalent information is discussed in [0040] as a unique identifier to anonymize the user info.

Regarding Claims 8 and 16:

Vaughn does not explicitly disclose primary and secondary user data. However, Desai teaches the following limitations dependent on claims 1 and 9: wherein the primary user data is corresponding to a plurality of constant user attributes and the secondary user data is corresponding to a plurality of variable user attributes: In Desai [col.17, ln.21-31], "facial recognition may be used to identify the user 116. … The facial features include measurements of, or comparisons between, facial fiducials or ordinal points. The facial features may include eyes, mouth, lips, nose, chin, ears, face width, skin texture, three-dimensional shape of the face, presence of eyeglasses, and so forth". Also, in [col.18, ln.49-59], "The different recognition techniques may be used in different situations or in succession. For example, clothing recognition and gait recognition may be used at greater distances between the user 116 and the imaging sensors 120(1) or when the user's 116 face is obscured from view by an imaging sensor 120(1).
In comparison, as the user 116 approaches the imaging sensor 120(1) and their face is visible, facial recognition may be used. Once identified, such as by way of facial recognition, one or more of gait recognition or clothing recognition may be used to track the user 116 within the facility 102". Clothing recognition is a form of a plurality of variable user attributes (the plurality being the various individual articles of clothing on a person), and facial recognition is a corresponding plurality of constant attributes.

As both are in the same field of endeavor, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine Vaughn's device with the identifying of user attributes disclosed in Desai, with a reasonable expectation of success. The motivation for doing so would have been to improve the user experience and reduce operating costs; see [col.4, ln.33-34].

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to BRIAN KEITH PALMARCHUK, whose telephone number is (571) 272-6261. The examiner can normally be reached M-F, 7 AM - 5 PM EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Navid Mehdizadeh, can be reached at (571) 272-7691. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/B.K.P./
Examiner, Art Unit 3669

/KENNETH M DUNNE/
Primary Examiner, Art Unit 3669
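
To make the hash-matching limitation at the center of the §101 and §103 analysis concrete (claims 7 and 15: determine the hash value of the primary user data, compare it against historical hash values, and classify the user as existing or new), here is a minimal illustrative sketch. The function names and the choice of SHA-256 are assumptions for illustration only, not details disclosed in the application or the cited art.

```python
import hashlib

# Illustrative sketch of the claims 7/15 flow: hash the primary user data,
# compare against historical hash values, and classify the user as existing
# or new. SHA-256 and every name here are assumptions for illustration,
# not details from the application or the cited references.

def hash_primary_data(primary_user_data: bytes) -> str:
    """Store primary user data only as a hash value (anonymized)."""
    return hashlib.sha256(primary_user_data).hexdigest()

def identify_user(primary_user_data: bytes, historical_hashes: set[str]) -> str:
    """Return 'existing' if the hash matches a historical hash, else 'new'."""
    h = hash_primary_data(primary_user_data)
    return "existing" if h in historical_hashes else "new"

# Hypothetical usage: two visitors previously seen, one new arrival.
history = {hash_primary_data(b"visitor-a"), hash_primary_data(b"visitor-b")}
print(identify_user(b"visitor-a", history))  # existing
print(identify_user(b"visitor-c", history))  # new
```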

Prosecution Timeline

Oct 27, 2023: Application Filed
Jul 24, 2025: Non-Final Rejection (§101, §103, §112)
Oct 27, 2025: Response Filed
Dec 09, 2025: Final Rejection (§101, §103, §112)
Mar 12, 2026: Response after Non-Final Action
Mar 25, 2026: Non-Final Rejection (§101, §103, §112) — current

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12601854
WEATHER DETECTION FOR A VEHICLE ENVIRONMENT
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12589677
METHOD FOR OPERATING AN ADJUSTMENT SYSTEM FOR AN INTERIOR OF A MOTOR VEHICLE
Granted Mar 31, 2026 (2y 5m to grant)
Patent 12522180
WIPER WASHER CONTROL APPARATUS
Granted Jan 13, 2026 (2y 5m to grant)
Patent 12427833
METHOD AND SYSTEM FOR OPERATING IN-VEHICLE AIR CONDITIONER
Granted Sep 30, 2025 (2y 5m to grant)
Study what changed to get these applications past this examiner. Based on the 4 most recent grants.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 80%
With Interview: 99% (+28.6%)
Median Time to Grant: 2y 4m
PTA Risk: High

Based on 10 resolved cases by this examiner. Grant probability is derived from the career allow rate.
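
One plausible reading of the 99% figure, assuming the dashboard adds the interview lift to the base grant probability in percentage points and caps the display at 99%; this is an inference from the numbers shown above, not a documented formula:

```python
# Assumed reconstruction: base grant probability plus interview lift in
# percentage points, capped at a 99% display ceiling. Inferred from the
# figures shown on this page, not a documented formula.
base = 80.0  # Grant Probability (%)
lift = 28.6  # Interview Lift (percentage points)
cap = 99.0   # assumed display ceiling

with_interview = min(base + lift, cap)
print(f"{with_interview:.0f}%")  # 99%
```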
