Detailed Action
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 11/20/2025 has been entered.
Rejections Under 35 U.S.C. §102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claim(s) 1, 6, 11 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Vaillant (US PG Publication 2018/0235833).
Regarding Claim 1, Vaillant (US PG Publication 2018/0235833) discloses a method for instructing a user (indicating by haptic stimulation [0201]) of a wearable imaging device (head haptic tool of Fig. 2 having IR cameras 13a and 13b, [0217], [0223]) to point the wearable imaging device in a direction of a selected object (calculates in real time information of a trajectory to follow by the user [0202]; cameras oriented toward the user gaze direction [0223]), the method comprising:
capturing a plurality of frames (camera connected to an image recognizing computer program [0074]), each comprising an image of a scenery comprising the object (specific objects of the environment [0104], [0185]);
detecting a motion of the object within the scenery (IR cameras 13b allow to localize IR objects in the environment so as to determine in real time the position of the user with respect to these IR objects [0189]; position, speed, and acceleration, direction, and orientation of the reference are provided to the user [0056]-[0057]);
and providing a haptic signal to the user (indicating by haptic stimulation [0201]), the haptic signal informing the user how to move the wearable imaging device (calculates in real time information of a trajectory to follow by the user [0202]; cameras oriented toward the user gaze direction [0223]) to keep the object within the captured scenery (the reference user situated downstream on the same trajectory is constituted by a camera system worn by the user [0081] and is the moving primary gate [0035], and its position, speed, acceleration, direction, and orientation are provided to the user [0056]-[0057], i.e., the user keeps the reference target in the field of view; see also paragraphs [0289]-[0290]: because the cameras are oriented toward the user gaze direction [0223], and a vibrating system [0294] aids the user in directing the gaze toward the “correct direction” [0289]-[0290], including the left and right limits of the “primary gate” [0270], the haptic feedback system guides the user into keeping the gaze, and therefore the cameras, in the desired orientation),
the haptic signal comprising at least one of:
a measure of the direction of motion (transmitting to the user the position of the object [0056]; transmitting to the user the speed of the object including direction and orientation, i.e., velocity [0056]-[0057]) of the object within the frame (cameras detect waves radiated by objects [0104]);
a measure of the speed of motion (transmitting to the user the speed of the object including direction and orientation, i.e., velocity [0056]-[0057]) of the object within the frame (cameras detect waves radiated by objects [0104]);
and a measure of the distance of the object (available information is its distance [0059]) from the edge of the frame.
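As a purely illustrative aside, the guidance loop recited in Claim 1 and mapped above can be sketched as follows. This sketch is not Vaillant's implementation; the helper names (detect_object, send_haptic) are hypothetical stubs.

```python
import cv2
import numpy as np

def detect_object(frame):
    """Stub detector (hypothetical): return the (x, y) centroid of the
    selected object in this frame, or None if it is not found."""
    return None  # placeholder; e.g., color threshold, template match, or a trained model

def send_haptic(direction, speed, edge_dist):
    """Stub actuator (hypothetical): map the three claimed measures to a
    vibration pattern, e.g., stronger pulses as the object nears an edge."""
    print(f"dir={direction}, speed={speed:.1f} px/s, edge={edge_dist:.0f} px")

def haptic_cue(prev_xy, curr_xy, frame_shape, dt):
    """Derive the three claimed measures from two successive 2-D detections."""
    h, w = frame_shape[:2]
    direction = np.subtract(curr_xy, prev_xy)      # direction of motion within the frame (px)
    speed = float(np.linalg.norm(direction)) / dt  # speed of motion within the frame (px/s)
    x, y = curr_xy
    edge_dist = min(x, y, w - x, h - y)            # distance from the nearest frame edge (px)
    return direction, speed, edge_dist

cap = cv2.VideoCapture(0)                          # stand-in for the wearable camera
fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
prev = None
while True:
    ok, frame = cap.read()
    if not ok:
        break
    curr = detect_object(frame)
    if prev is not None and curr is not None:
        send_haptic(*haptic_cue(prev, curr, frame.shape, 1.0 / fps))
    prev = curr
cap.release()
```

The three returned measures correspond to the three alternatives recited at the end of Claim 1.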
Regarding Claim 6, Vaillant (US PG Publication 2018/0235833) discloses a non-transitory computer readable medium including instructions that, when executed by at least one processor of a mobile communication device, cause the at least one processor to perform operations (software [0081]; microprocessor, a microcontroller, an on-board system, a FPGA or an ASIC [0093])…. The remainder of Claim 6 is rejected on the grounds provided in Claim 1.
Regarding Claim 11, Vaillant (US PG Publication 2018/0235833) discloses a processing device communicatively coupled to the imaging device and operative to execute a first software module (software [0081]; microprocessor, a microcontroller, an on-board system, a FPGA or an ASIC [0093]; image recognizing computer program [0074]); the processing device additionally communicatively coupled to at least one user interface device and additionally operative to execute a second software module (software [0081]; microprocessor, a microcontroller, an on-board system, a FPGA or an ASIC [0093] for haptic stimulation [0201]). The remainder of Claim 11 is rejected on the grounds provided in Claim 1.
Rejections Under 35 U.S.C. §103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 2, 7, 12 are rejected under 35 U.S.C. 103 as being unpatentable over Vaillant (US PG Publication 2018/0235833) in view of Yuan (CN 107580199 A).
Regarding Claim 2, Vaillant (US PG Publication 2018/0235833) discloses the method according to claim 1, additionally comprising:
detecting a motion of the object, comprising detecting at least one of: the direction of motion of the object within the frame (position, speed, and acceleration, direction, and orientation are provided to the user [0056]-[0057]);
the speed of motion of the object within the frame (position, speed, and acceleration, direction, and orientation are provided to the user [0056]-[0057]);
and providing a haptic signal to the user (indicating by haptic stimulation [0201]).
Vaillant does not disclose, but Yuan (CN 107580199 A) teaches, detecting … the distance of the object from an edge of the frame (target distance to boundary, for camera handover, p. -10).
One of ordinary skill in the art before the application was filed would have been motivated to measure the distance of the target of Vaillant to the edge of the frame, as in Yuan, because the trajectory of targets, even after they leave the field of view, is especially important for the eyes-free, hands-free system of Vaillant, which enables visually impaired persons to navigate their environment unassisted (Vaillant [0001]).
Regarding Claim 7, the claim is rejected on the grounds provided in Claim 2.
Regarding Claim 12, the claim is rejected on the grounds provided in Claim 2.
Claim(s) 3-4, 8-9, 13-14 are rejected under 35 U.S.C. 103 as being unpatentable over Vaillant (US PG Publication 2018/0235833) in view of Yan (CN 105930382 A).
Regarding Claim 3, Vaillant (US PG Publication 2018/0235833) discloses the method according to claim 1, additionally comprising at least one of:
detecting the object within the scenery (camera connected to an image recognizing computer program [0074]; specific objects of the environment [0104], [0185]), the detecting comprising:
collecting a plurality of images of the object from different angles (stereoscopic vision [0078]).
Vaillant does not disclose, but Yan (CN 105930382 A) teaches
producing a 3D model of the object based on the collected plurality of images of the object (constructing a 3D scene model based on 2D image, Background, known in the art, p.1);
using the 3D model, rendering a plurality of images of the object from intermediate angles (from the 3D model, generate several different 2D images, S2, p.1; 10-50 views with different backgrounds and light conditions, p.2; k-view projection of 10-50 images evenly distributed in angle around the object to cover the entire viewing surface, p.3-4) to create a training collection of images (establishing convolutional neural network for calculating similarity between the 2D picture and the corresponding 3D model, S3, p.2);
using the training collection of images, training an imaging AI-model for recognizing the object (establishing convolutional neural network for calculating similarity between the 2D picture and the corresponding 3D model, S3, p.2);
and using the imaging AI-model to detect the object in the captured scenery (S4 and S5, inputting a 2D image into the CNN and calculating the similarity degree… to complete the search, p.2).
One of ordinary skill in the art before the application was filed would have been motivated to implement the “image recognizing”/“image processing” of Vaillant using a convolutional neural network, as taught by Yan, because CNNs have been recognized as the preeminent image classification model, and because Yan teaches that while 3D model matching has many benefits over 2D recognition, it takes too long to run in real time, whereas 2D-3D matching preserves some of the benefits of 3D matching while being executable in real time (p.1), providing efficiency and accuracy to object detection.
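For illustration only, the 2D-3D matching scheme attributed to Yan above can be sketched as follows. The renderer is a placeholder and all names are hypothetical; this is not Yan's code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def render_views(model_3d, k=30):
    """Hypothetical renderer: return k 2-D renders of `model_3d`, evenly
    spaced in viewing angle, with varied backgrounds and lighting (per the
    mapping of Yan at pp. 2-4). Returns a (k, 3, H, W) tensor."""
    return torch.rand(k, 3, 64, 64)  # placeholder renders

class EmbeddingCNN(nn.Module):
    """Small CNN that embeds a 2-D image; similarity between a query photo
    and a 3-D model is the cosine similarity of their embeddings."""
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, dim),
        )
    def forward(self, x):
        return F.normalize(self.net(x), dim=1)

def similarity(model, photo, model_3d):
    """Score a 2-D photo against a 3-D model via the model's rendered views."""
    views = render_views(model_3d)
    q = model(photo.unsqueeze(0))   # (1, dim) query embedding
    v = model(views)                # (k, dim) view embeddings
    return (q @ v.T).max().item()   # best-view cosine similarity
```

Because the embeddings are L2-normalized, the dot products are cosine similarities, and the best-view score plays the role of Yan's similarity degree.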
Regarding Claim 4, Vaillant (US PG Publication 2018/0235833) discloses the method according to claim 3.
Vaillant does not disclose, but Yan (CN 105930382 A) teaches, the method additionally comprising at least one of:
receiving from the imaging AI-model a measure of validity of the detection of the object in the scenery (calculation result of the similarity degree, S4, p.2, p.5);
based on the measure of validity of detection, instructing the rendering of at least one object (from the 3D model, S2, p.1, p.3-4) from at least one intermediate angle (generate several different 2D images, S2, p.1, p.3-4) to add to the training collection of images (from the 3D model, generate several different 2D images, S2, p.1; 10-50 views with different backgrounds, light conditions, p.2; k-view projection of 10-50 images evenly distributed around the object angle to cover all the viewing surface, p.3-4);
and instructing the training of a new imaging AI-model for recognizing the object.
One of ordinary skill in the art before the application was filed would have been motivated to combine Vaillant and Yan for the reasons provided in the rejection of Claim 3 above.
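Continuing the sketch above for the Claim 4 alternatives, a hypothetical validity-driven loop might look as follows; train_cnn is likewise a hypothetical retraining routine, and the threshold is an assumption found in neither reference.

```python
RETRAIN_THRESHOLD = 0.6  # assumed cutoff; neither reference specifies a value

def detect_with_feedback(photo, model, model_3d, train_views):
    """Hypothetical loop over the Claim 4 alternatives: receive a measure of
    validity (the similarity score) and, if it is low, instruct the rendering
    of further intermediate views and the training of a new AI-model."""
    score = similarity(model, photo, model_3d)             # measure of validity
    if score < RETRAIN_THRESHOLD:
        train_views = torch.cat([train_views, render_views(model_3d, k=10)])
        model = train_cnn(train_views)                     # hypothetical retraining step
    return score, model, train_views
```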
Regarding Claim 8, the claim is rejected on the grounds provided in Claim 3.
Regarding Claim 9, the claim is rejected on the grounds provided in Claim 4.
Regarding Claim 13, the claim is rejected on the grounds provided in Claim 3.
Regarding Claim 14, the claim is rejected on the grounds provided in Claim 4.
Claim(s) 5, 10, 15 are rejected under 35 U.S.C. 103 as being unpatentable over Vaillant (US PG Publication 2018/0235833) in view of Cardenoso (NPL “Deep Reinforcement Learning for Haptic Shared Control in Unknown Tasks,” arXiv 2021).
Regarding Claim 5, Vaillant (US PG Publication 2018/0235833) discloses the method according to claim 1, additionally comprising:
detecting a motion of the wearable imaging device responsive to the haptic signal provided to the user (real-time determination of user’s position based on GPS, inertial navigator, accelerometer [0189]-[0196]).
Vaillant does not disclose, but Cardenoso (NPL “Deep Reinforcement Learning for Haptic Shared Control in Unknown Tasks,” arXiv 2021) teaches, creating a training collection of responses (samples to train the agent, Section III.B, RL-Controller) comprising a plurality of motions of the [] device (user velocity pH, Section III.B, RL-Controller) responsive to respective haptic signals provided to the user (assistive force fH, Section III.B, RL-Controller);
using the training collection of responses, training a signaling AI-model (designing a controller based on the deep deterministic policy gradient (DDPG) algorithm to provide the assistance, Abstract);
using the signaling AI-model, producing the haptic signal provided to the user (experimental results, Section VII; the model learned to use lower assistive forces and give more control to the user, achieved stable convergence, Section VII.C).
One of ordinary skill in the art before the application was filed would have been motivated to supplement the calculating means 6 of Vaillant with a haptic-feedback agent trained by the method of Cardenoso because Cardenoso teaches that the challenge of haptic feedback is providing optimal forces that assist the task by minimizing the time taken to perform it and minimizing user resistance to the feedback (Abstract). In other words, by learning user responses, the haptic agent can provide the optimal signal to help, and not hinder, completion of the target task.
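As an illustrative aside, the actor half of such a DDPG controller can be sketched as follows; the critic, replay buffer, and training loop are omitted, and the state layout and force limit are assumptions rather than Cardenoso's specification.

```python
import torch
import torch.nn as nn

class Actor(nn.Module):
    """DDPG-style actor: maps the task state (including the user's velocity
    pH) to a bounded assistive force fH rendered on the haptic device."""
    def __init__(self, state_dim=6, force_dim=2, max_force=5.0):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, force_dim), nn.Tanh(),  # outputs in [-1, 1]
        )
        self.max_force = max_force                # assumed actuator limit (N)

    def forward(self, state):
        return self.max_force * self.net(state)   # assistive force fH

# One control step: observe the state, output the force rendered to the user.
actor = Actor()
state = torch.zeros(1, 6)  # assumed layout: [pH (2), position error (2), velocity error (2)]
f_H = actor(state)         # fH sent to the haptic device (cf. Fig. 3 of Cardenoso)
```

The bounded Tanh output mirrors the observation above that the trained agent learned to apply lower assistive forces and cede control to the user.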
Regarding Claim 10, the claim is rejected on the grounds provided in Claim 5.
Regarding Claim 15, the claim is rejected on the grounds provided in Claim 5.
Response to Arguments
Applicant’s remarks filed 11/20/2025 have been fully considered but are unpersuasive.
Applicant argues that Claim 1 provides the user with 2-D data of the object within the frame, whereas Vaillant provides the user with 3-D data of the object within the frame. Remarks filed 11/20/2025 at pp. 8-9. This is unpersuasive for several reasons: 1) Applicant asserts limitations that are not claimed; 2) Applicant concludes without support that Vaillant presents 3-D data; and 3) the open-ended transitional phrase “comprising” in Applicant’s preamble leaves the claim open to processing 3-D data, even if 2-D data were claimed.
Applicant relies on the word “frame” in Claim 1 to assert that all measurements provided in Claim 1 are 2-D measurements. Remarks 11/20 at p. 9. This is unpersuasive. The word “frame” refers to the image frame captured by the camera in the image capture subroutine. Spec. at 26. There is no foundation in science, math, logic, or language for concluding that all computations performed on a 2-D image yield only 2-D data. Applicant must articulate a reasoned basis for drawing such a conclusion.
Applicant relies on the word “vector” in Vaillant at [0057] to conclude that Vaillant deduces 3-D data from the 2-D image captured by the camera. Remarks 8/14/2025 at 8. The conclusion based on the word “vector” has no rational basis. There are 2-D vectors, e.g., (x, y); 3-D vectors, e.g., (x, y, z); and n-dimensional vectors, e.g., (x1, x2, …, xn). There is no basis in the art to conclude that, because Vaillant computes a vector, it necessarily provides 3-D data.
Lastly, Applicant argues that the 2-D data used in the invention is simpler than 3-D data. Remarks 11/20 at 9. This argument is moot. Even if 2-D data were claimed, 2-D data is a subset of 3-D data, and Applicant’s invention is a method “comprising…,” which leaves room for the reference to include additional features and still read on the claims. See MPEP 2111.03.
Applicant states, “The Office Action acknowledges that Vaillant does not teach ‘within the frame’….” Remarks 11/20 at 9. Examiner disagrees. Nowhere does the Office Action acknowledge that Vaillant does not teach “within the frame.”
Applicant argues that Vaillant does not disclose “informing the user how to move the wearable imaging device to keep the object within the captured scenery.” Remarks 11/20 at 9. This is unpersuasive because Vaillant discloses that the system enables the user to “direct his/her gaze toward the correct direction,” which means, because the camera is oriented in the direction of the gaze, that “the correct direction” remains in the captured scenery. Vaillant at [0289]. Applicant also asserts that Vaillant does not disclose a haptic stimulus guiding the user (Remarks 11/20 at 9), but Vaillant discloses a vibrating system, which is haptic by definition, to indicate direction. Vaillant at [0294]. Accordingly, one of ordinary skill in the art would understand that Vaillant discloses informing the user how to move the wearable imaging device to keep the object within the captured scenery.
Regarding Claim 2, Applicant argues that neither Vaillant nor Yuan “measure relative to the frame.” This argument is unpersuasive because it argues limitations that are not claimed. The claimed limitation is “detecting … at least one of: the direction of motion of object within the frame….” Claim 2. Vaillant discloses indicating to the user (the wearer of the camera device) the direction of the “reference user,” a person downstream in the field of view of the camera system. Vaillant at [0044]-[0045], [0056]-[0057]. Claim 2 recites three alternative limitations, and because Vaillant discloses at least one of them, the limitation is met.
Applicant’s remarks regarding Claim 4 are unpersuasive because Applicant does not properly interpret the claim. Remarks at 10. Claim 4 requires only one of the receiving of the validity or the instructing to be performed, as indicated by the phrase “at least one of” in the preamble. Because Applicant has not presented arguments against the Office Action’s showing of the receiving, see Remarks at 10, arguments regarding the instructing are moot: limitations written in the alternative are met by a reference that meets any one of them.
Applicant’s remarks against the mapping of Claim 5 are also unpersuasive. The argument repeats arguments presented in the Remarks filed 8/14/2025 and is unpersuasive on the grounds provided in the Office Action dated 8/26/2025. Applicant argues that the reference modifies the signals from the haptic device; this is incorrect. The assistive force fH is sent to the haptic device (p. 3); the next position of the robotic arm, pM, is sent to the robotic arm itself, but this is immaterial to the rejection. See Fig. 3 and p. 6. That is, one of the outputs of the controller is the haptic feedback to the user, and that haptic feedback is based on the force the user applies to the joystick (user velocity vector pH). This meets the limitations of Claim 5.
Conclusion
The following prior art is not relied upon but is made of record:
US 11602300 B2 - visual identification and positioning technology, a brain-computer interface, and a robotic arm that facilitate paralyzed patients in drinking water by themselves
US 20150355711 A1 - outputting a haptic effect to a haptic peripheral in response to a zoom state of a virtual camera of a virtual environment
This is a continuation of applicant's earlier Application No. 18/365,661. All claims are identical to, patentably indistinct from, or have unity of invention with the invention claimed in the earlier application (that is, restriction (including lack of unity) would not be proper) and could have been finally rejected on the grounds and art of record in the next Office action if they had been entered in the earlier application. Accordingly, THIS ACTION IS MADE FINAL even though it is a first action in this case. See MPEP § 706.07(b). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHADAN E HAGHANI whose telephone number is (571)270-5631. The examiner can normally be reached M-F 9AM - 5PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jay Patel can be reached at 571-272-2988. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SHADAN E HAGHANI/ Examiner, Art Unit 2485