DETAILED ACTION
Notice of AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claims 1-20 are pending and are rejected.
Drawings
Drawings filed on 11/16/2023 are accepted for examination purposes.
Claim Objections
Claim 14:
Claim 14 is objected to because of the following informalities:
Claim 14 includes a typographical error and erroneously recites “The method of claim 1”; however, independent claim 1 is not a method claim. Independent claim 11 is a method claim.
For examination purposes, claim 14 is construed as depending from method claim 11, and the limitation is therefore construed as “The method of claim 11.”
Appropriate correction is required.
Claims 15-16:
Based on their dependency from claim 14, claims 15-16 are also objected to for the same reason.
Claim Rejections - 35 USC § 112
35 U.S.C. 112(b)
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
Claims 14-16 and 18 are rejected under 35 U.S.C. 112(b) as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor regards as the invention.
-Unclear limitations and/or insufficient antecedent basis:
Claims 14 and 18:
Claims 14 and 18 recite “the compliant mechanism.” There is insufficient antecedent basis for the limitation “the compliant mechanism” in these claims.
They depend from the method of claim 11; however, claim 11 recites “compliant mechanism units” and not “a compliant mechanism.”
For examination purposes, under the broadest reasonable interpretation, this limitation is construed as “a compliant mechanism.”
Alternatively, for the purpose of compact prosecution, the examiner notes that independent claim 1, which is similar to independent claim 11, provides proper antecedent basis for the later-recited compliant mechanism (recited in dependent claims 4 and 8) as “a compliant mechanism comprising: one or more compliant mechanism units.” Using claim 1 as an example, applicant may consider amending independent claim 11 to provide proper antecedent basis for “the compliant mechanism” recited in claims 14 and 18.
Appropriate correction is required.
Claims 15-16:
Based on their dependency from claim 14, claims 15-16 are also rejected under 35 U.S.C. 112(b) for the same reasons.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-9 and 11-19 are rejected under 35 U.S.C. 103 as being unpatentable over KONG et al. (US20240225945A9) [hereinafter KONG] in view of Chalard et al. (US20250165072A1) [hereinafter CHALARD].
Regarding claim 1:
KONG discloses, A device for generating a force in accordance with sensor data captured in an environment of the device, the device comprising: [¶60: Each of the plurality of haptic actuators 106…that exert a force on the skin of the user. The processor 104 may determine an actuation pattern of the plurality of haptic actuators 106…and the plurality of haptic actuators 106 are actuated with a plurality of actuation patterns each corresponding to a respective action that needs to be taken by the user…
¶10: the sensing unit continuously and regularly captures real-time images of an environment. The machine vision unit is configured to determine a walkable area on each of the real-time images using an edge detection algorithm and a semantic segmentation algorithm, and identify the one or more obstacles using an object detection model. The path planning unit is capable of developing the movement route by avoiding the one or more obstacles and transmit to the haptic unit the movement commands based on the movement route.];
a sensor unit that captures sensor data; [¶10: the sensing unit continuously and regularly captures real-time images of an environment.];
a compliant mechanism comprising: one or more compliant mechanism units; and one or more…actuators, each…actuator configured to actuate one of the one or more compliant mechanism units; [¶41: haptic unit 20 includes a haptic arrangement having a plurality of haptic actuators 106 adapted to provide a tactile stimulation to the torso or limbs of the user…
¶60: the plurality of haptic actuators 106 include at least one actuator proximate to the upper left portion of the user's body, at least one actuator proximate to the upper right portion of the user's body, at least one actuator proximate to the lower left portion of the user's body, at least one actuator proximate to the lower right portion of the user's body.];
a processing unit which: extracts a pixel value from the sensor data for each of the one or more compliant mechanism units; and [¶51: the sensing unit 101 comprises one or more sensing devices selected from a group consisting of…light detection and ranging (LiDAR) device 202,…
¶57: The processor 104 then based on this and the processing with the LiDAR data to devise the movement route so that the movement route facilitates the user…The processor 104 can determine the movement route based solely on…the LiDAR data obtained from the LiDAR device 202,…
¶76: on-board AI-based edge detection algorithm 721 is executed to determine the walkable area 725 on each of the real-time images. By performing edge subtraction based on the walkable area 725 with noise removal 722 and appropriate thresholding, a binary mask 726 is generated. The binary mask 726 is a foreground mask of the same size as the original image, with each pixel marked as either a background pixel or a foreground pixel. With the detected object 730, the path planning unit 321B executes algorithms for calculating the goals and route 731 to the target destination 735…the haptic unit 20 is actuated based on a movement route to signify the user 740.];
outputs one or more control signals to actuate each of the one or more compliant mechanism units in accordance with the pixel value extracted for the compliant mechanism unit. [¶51: the sensing unit 101 comprises one or more sensing devices selected from a group consisting of…light detection and ranging (LiDAR) device 202,…
¶57: The processor 104 then based on this and the processing with the LiDAR data to devise the movement route so that the movement route facilitates the user…The processor 104 can determine the movement route based solely on…the LiDAR data obtained from the LiDAR device 202,…
¶76: on-board AI-based edge detection algorithm 721 is executed to determine the walkable area 725 on each of the real-time images. By performing edge subtraction based on the walkable area 725 with noise removal 722 and appropriate thresholding, a binary mask 726 is generated. The binary mask 726 is a foreground mask of the same size as the original image, with each pixel marked as either a background pixel or a foreground pixel. With the detected object 730, the path planning unit 321B executes algorithms for calculating the goals and route 731 to the target destination 735…the haptic unit 20 is actuated based on a movement route to signify the user 740.], but doesn’t explicitly disclose, and
CHALARD discloses, a compliant mechanism comprising: one or more electromagnetic actuators, each electromagnetic actuator configured to actuate one of the one or more compliant mechanism units; and [¶42: means of acquiring the environment, for example, a spectacle frame (10)…acquire data about the environment in real time to provide digital images that control the actions of a haptic transducer, as shown in FIG. 1 . The haptic transducer generates actions in the form of pressure on the skin by electromagnetic or electromechanical actuators,...
¶18: acquiring a real or virtual visual environment…processing the digital representation of the visual environment in order to provide an electrical signal for controlling a haptic interface,…periodically extracting at least one pulsed digital activation pattern for a subset of spikes of the haptic zone. The haptic interface consists of a lumbar belt with an active surface of N×M spikes whose movement is controlled by actuators,…the processing means provides a sequence of P activation frames for the actuators,…
¶75: This first step consists of calculating a depth map of size dmW*dmH from the two images acquired… by lidar,].
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the electromagnetic haptic actuators for providing force feedback, as taught by CHALARD, with the system taught by KONG as discussed above, with a reasonable expectation of success, in order to provide a comfortable user experience with good resolution [¶45: The surface of the active matrix formed by the solenoids covers an extended lumbar region, for good resolution and comfort of use.].
Regarding claim 2:
KONG and CHALARD disclose, The device of claim 1, and
KONG further discloses, the sensor unit comprises a light detection and ranging (LiDAR) scanner that outputs LiDAR data and extracting a pixel value for each of the one or more compliant mechanism units comprises:
generating a LiDAR pixel map based on the LiDAR data; and
extracting a pixel value from the LiDAR pixel map for each of the one or more compliant mechanism units. [¶51: the sensing unit 101 comprises one or more sensing devices selected from a group consisting of…light detection and ranging (LiDAR) device 202,…
¶57: The processor 104 then based on this and the processing with the LiDAR data to devise the movement route so that the movement route facilitates the user…The processor 104 can determine the movement route based solely on…the LiDAR data obtained from the LiDAR device 202,…
¶76: on-board AI-based edge detection algorithm 721 is executed to determine the walkable area 725 on each of the real-time images. By performing edge subtraction based on the walkable area 725 with noise removal 722 and appropriate thresholding, a binary mask 726 is generated. The binary mask 726 is a foreground mask of the same size as the original image, with each pixel marked as either a background pixel or a foreground pixel. With the detected object 730, the path planning unit 321B executes algorithms for calculating the goals and route 731 to the target destination 735…the haptic unit 20 is actuated based on a movement route to signify the user 740.].
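Examiner's illustrative note (not relied upon for the rejection): the following minimal Python sketch is offered only to illustrate the general concept discussed above of generating a pixel map from range (LiDAR-type) data and extracting one pixel value per compliant mechanism unit; the grid sizes, function names, and scaling convention are hypothetical assumptions made for this example and are not taken from KONG or CHALARD.

```python
import numpy as np

def lidar_pixel_map(ranges: np.ndarray, max_range: float = 5.0) -> np.ndarray:
    """Convert raw range readings (meters) into a normalized 2-D pixel map.
    Ranges are clipped to max_range and mapped to 0-255 so that nearer
    objects yield larger pixel values (a hypothetical convention)."""
    clipped = np.clip(ranges, 0.0, max_range)
    return ((1.0 - clipped / max_range) * 255.0).astype(np.uint8)

def per_unit_pixel_values(pixel_map: np.ndarray, unit_rows: int, unit_cols: int) -> np.ndarray:
    """Reduce the pixel map to one pixel value per compliant mechanism unit
    by averaging the block of pixels assigned to each unit."""
    h, w = pixel_map.shape
    bh, bw = h // unit_rows, w // unit_cols
    out = np.zeros((unit_rows, unit_cols), dtype=np.uint8)
    for r in range(unit_rows):
        for c in range(unit_cols):
            block = pixel_map[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw]
            out[r, c] = int(block.mean())
    return out

# Example: a simulated 64x64 range image reduced to a 4x4 grid of units;
# each resulting value can then be scaled into a control signal per unit.
ranges = np.random.uniform(0.2, 5.0, size=(64, 64))
unit_values = per_unit_pixel_values(lidar_pixel_map(ranges), 4, 4)
control_signals = unit_values / 255.0  # normalized 0-1 drive level per unit
```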
Regarding claim 3:
KONG and CHALARD disclose, The device of claim 2, and
KONG further discloses, wherein the one or more pixel values extracted from the LiDAR pixel map are indicative of a height of an object at a location in the environment or a distance of an object from the device. [The examiner notes that the claim requires only one of the elements separated by “or,” and only one of them is given patentable weight. Accordingly, KONG discloses a height of an object at a location in the environment, as described below:
¶76: The binary mask 726 is a foreground mask of the same size as the original image, with each pixel marked as either a background pixel or a foreground pixel. With the detected object 730, the path planning unit 321B executes algorithms for calculating the goals and route 731 to the target destination 735…the haptic unit 20 is actuated based on a movement route to signify the user 740…
¶52: sensing unit 101 is also equipped with a LiDAR device 202,…The sensing unit 101 is configured to detect the low-height (for example, 2 meters in height) environment structures, objects, or obstacles,].
Regarding claim 4:
KONG and CHALARD disclose, The device of claim 2, and
KONG further discloses, the compliant mechanism comprises a plurality of compliant mechanism units; and [¶41: haptic unit 20 includes a haptic arrangement having a plurality of haptic actuators 106 adapted to provide a tactile stimulation to the torso or limbs of the user…
¶60: the plurality of haptic actuators 106 include at least one actuator proximate to the upper left portion of the user's body, at least one actuator proximate to the upper right portion of the user's body, at least one actuator proximate to the lower left portion of the user's body, at least one actuator proximate to the lower right portion of the user's body.];
the processing unit extracts a pixel value for each compliant mechanism unit [¶60: the plurality of haptic actuators 106 include at least one actuator proximate to the upper left portion of the user's body, at least one actuator proximate to the upper right portion of the user's body, at least one actuator proximate to the lower left portion of the user's body, at least one actuator proximate to the lower right portion of the user's body…
¶76: on-board AI-based edge detection algorithm 721 is executed to determine the walkable area 725 on each of the real-time images. By performing edge subtraction based on the walkable area 725 with noise removal 722 and appropriate thresholding, a binary mask 726 is generated. The binary mask 726 is a foreground mask of the same size as the original image, with each pixel marked as either a background pixel or a foreground pixel. With the detected object 730, the path planning unit 321B executes algorithms for calculating the goals and route 731 to the target destination 735…the haptic unit 20 is actuated based on a movement route to signify the user 740.], but doesn’t explicitly disclose, and
CHALARD further discloses, the processing unit extracts a pixel value for each compliant mechanism unit by reducing the sensor data into an array of pixel values, each pixel value corresponding to one of the compliant mechanism units. [¶26: the step of calculating a digital image of N and M haptic pixels comprises processing consisting in eliminating voxels outside a user's traffic lane prior to calculating the digital image of N and M haptic pixels, established from the remaining voxels only....
¶44: This belt (20) is equipped with a set of solenoids arranged on supports (21 to 24) to form a matrix, for example, of 20×40 pixels. These solenoids are arranged to form a regular matrix,…An electronic circuit receives the visual signals and processes them to control the solenoids,…
¶18: The haptic interface consists of a lumbar belt with an active surface of N×M spikes whose movement is controlled by actuators, preferably solenoids,…
¶20: method for processing the digital representation of a visual environment to control a haptic interface consisting of a lumbar belt with an active surface of N×M actuators,…
¶42: spectacle frame (10) equipped with cameras (11, 12) used to acquire data about the environment in real time to provide digital images that control the actions of a haptic transducer, as shown in FIG. 1 . The haptic transducer generates actions in the form of pressure on the skin by electromagnetic or electromechanical actuators,].
Regarding claim 5:
KONG and CHALARD disclose, The device of claim 4, and
CHALARD further discloses, wherein the processing unit calculates the control signal for actuating each compliant mechanism unit in accordance with the pixel value corresponding to the compliant mechanism unit. [¶26: the step of calculating a digital image of N and M haptic pixels comprises processing consisting in eliminating voxels outside a user's traffic lane prior to calculating the digital image of N and M haptic pixels, established from the remaining voxels only....
¶44: This belt (20) is equipped with a set of solenoids arranged on supports (21 to 24) to form a matrix, for example, of 20×40 pixels. These solenoids are arranged to form a regular matrix,…An electronic circuit receives the visual signals and processes them to control the solenoids,…
¶18: The haptic interface consists of a lumbar belt with an active surface of N×M spikes whose movement is controlled by actuators, preferably solenoids,…
¶20: method for processing the digital representation of a visual environment to control a haptic interface consisting of a lumbar belt with an active surface of N×M actuators,…
¶42: spectacle frame (10) equipped with cameras (11, 12) used to acquire data about the environment in real time to provide digital images that control the actions of a haptic transducer, as shown in FIG. 1 . The haptic transducer generates actions in the form of pressure on the skin by electromagnetic or electromechanical actuators,].
Regarding claim 6:
KONG and CHALARD disclose, The device of claim 1, and
CHALARD further discloses, wherein the sensor data comprises two-dimensional image data and the pixel value extracted from the two-dimensional image data comprises a grayscale pixel value. [¶26: the step of calculating a digital image of N and M haptic pixels comprises processing consisting in eliminating voxels outside a user's traffic lane prior to calculating the digital image of N and M haptic pixels, established from the remaining voxels only....
¶42: spectacle frame (10) equipped with cameras (11, 12) used to acquire data about the environment in real time to provide digital images that control the actions of a haptic transducer, as shown in FIG. 1 . The haptic transducer generates actions in the form of pressure on the skin by electromagnetic or electromechanical actuators,…
¶47: The cameras (11, 12) acquire binocular images to reconstruct a digital image with depth information. The first step is to build a grayscale image. For each pixel of the visual image (100) (FIG. 2 ), a haptic image (200) (FIG. 3 ) is transcribed in grayscale depending on the distance of each point from the camera.].
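Examiner's illustrative note (not relied upon for the rejection): as a purely explanatory sketch of the grayscale-pixel concept quoted from CHALARD above (¶44, ¶47), the Python below converts two-dimensional image data to grayscale and block-averages it into an N×M array, one grayscale pixel value per haptic unit. The 20×40 size mirrors the example matrix quoted from CHALARD ¶44; the luma weights and all function names are assumptions made solely for this illustration.

```python
import numpy as np

# Standard BT.601 luma weights, used here only as one possible way to obtain
# a single grayscale value from 2-D RGB image data (an assumption for this sketch).
LUMA_WEIGHTS = np.array([0.299, 0.587, 0.114])

def to_grayscale(rgb: np.ndarray) -> np.ndarray:
    """Collapse an HxWx3 RGB image into an HxW grayscale image (0-255)."""
    return (rgb.astype(np.float64) @ LUMA_WEIGHTS).clip(0, 255).astype(np.uint8)

def grayscale_to_units(gray: np.ndarray, n: int, m: int) -> np.ndarray:
    """Block-average an HxW grayscale image into an N x M array so that each
    haptic unit is associated with one grayscale pixel value."""
    h, w = gray.shape
    row_edges = np.linspace(0, h, n + 1, dtype=int)
    col_edges = np.linspace(0, w, m + 1, dtype=int)
    out = np.zeros((n, m), dtype=np.uint8)
    for i in range(n):
        for j in range(m):
            block = gray[row_edges[i]:row_edges[i + 1], col_edges[j]:col_edges[j + 1]]
            out[i, j] = int(block.mean())
    return out

# Example: a simulated 240x320 RGB frame reduced to a 20x40 haptic matrix
# (20x40 is the example solenoid matrix size quoted from CHALARD ¶44).
frame = np.random.randint(0, 256, size=(240, 320, 3), dtype=np.uint8)
haptic_pixels = grayscale_to_units(to_grayscale(frame), 20, 40)
```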
Regarding claim 7:
KONG and CHALARD disclose, The device of claim 1, and
KONG further discloses, wherein the sensor unit comprises ultrasonic proximity detectors, a radar system, or a sonar sensor. [¶52: sensing unit 101 is also equipped with a LiDAR device 202, a RADAR device 203,].
Regarding claim 8:
KONG and CHALARD disclose, The device of claim 1, and
KONG further discloses, the compliant mechanism forms a haptic meta-surface comprising a flexible membrane supported by the one or more compliant mechanism units; and [¶41: the haptic unit 20 includes one or more haptic straps arranged to adjustably surround the user to secure the device body on the user…The haptic unit 20 includes a haptic arrangement having a plurality of haptic actuators 106 adapted to provide a tactile stimulation to the torso or limbs of the user…
¶43: the haptic unit 20 may include adjustable straps to retain the device body in position. The adjustable straps may be made of polypropylene, leather, fiber, silicone, natural rubber, nylon webbing, or other stretchable or non-stretchable materials. On the posterior side of the adjustable straps, a plurality of haptic actuators 106 are arranged to deliver a sensory signal to the user….the plurality of haptic actuators 106 are positioned on the underside of the adjustable straps to form a haptic arrangement adapted to provide a tactile stimulation to the torso or limbs of the user,];
the processing unit controls a shape of the haptic meta-surface by actuating the one or more compliant mechanism units in accordance with the sensor data. [¶41: The haptic unit 20 includes a haptic arrangement having a plurality of haptic actuators 106 adapted to provide a tactile stimulation to the torso or limbs of the user…
¶43: the haptic unit 20 may include adjustable straps to retain the device body in position….On the posterior side of the adjustable straps, a plurality of haptic actuators 106 are arranged to deliver a sensory signal to the user….the plurality of haptic actuators 106 are positioned on the underside of the adjustable straps to form a haptic arrangement adapted to provide a tactile stimulation to the torso or limbs of the user, such as vibration stimulation, push actuation, pull actuation, a temperature stimulation, a compression caused by a change in tension of a haptic strap].
Regarding claim 9:
KONG and CHALARD disclose, The device of claim 1, and
KONG further discloses, wherein each of the one or more compliant mechanism units is configured to actuate a microswitch. [¶43: the plurality of haptic actuators 106 are positioned on the underside of the adjustable straps to form a haptic arrangement adapted to provide a tactile stimulation to the torso or limbs of the user, such as vibration stimulation, push actuation, pull actuation,…
¶60: Each of the plurality of haptic actuators 106 comprises a motorized finger, an eccentric rotating mass, a linear resonant actuator, or any other actuators that exert a force on the skin of the user. The processor 104 may determine an actuation pattern of the plurality of haptic actuators 106 that corresponds to the action that needs to be taken by the user, and the plurality of haptic actuators 106 are actuated with a plurality of actuation patterns each corresponding to a respective action that needs to be taken by the user.
The examiner notes that one of ordinary skill in the art would understand that KONG teaches actuation of the actuators such that a microswitch is used for activation/deactivation of the actuators].
Regarding claim 11:
KONG discloses, A method for actuating one or more compliant mechanism units in accordance with sensor data, the method comprising: [¶14: method includes (1) receiving a first image and a second image captured by one or more cameras; (2) determining a walkable area using an edge detection algorithm and an AI semantic segmentation algorithm on the second image;…(4) developing a movement route for the user by avoiding the one or more obstacles; and (5) transmitting movement commands to a haptic unit…
¶51: The haptic unit 20 includes a haptic arrangement having a plurality of haptic actuators 106. In certain embodiments, the sensing unit 101 comprises one or more sensing devices selected from a group consisting of one or more camera devices 102, a light detection and ranging (LiDAR) device 202…
¶70: the plurality of haptic actuators 106 are distributed at locations proximate to an upper left section, an upper right section, a lower left section, and a lower right section of the user's body to form a haptic arrangement…
¶57: The processor 104 then based on this and the processing with the LiDAR data to devise the movement route so that the movement route facilitates the user…The processor 104 can determine the movement route based solely on…the LiDAR data obtained from the LiDAR device 202,];
capturing sensor data; [¶10: the sensing unit continuously and regularly captures real-time images of an environment.];
extracting a pixel value from the sensor data for each of the one or more compliant mechanism units; [¶51: the sensing unit 101 comprises one or more sensing devices selected from a group consisting of…light detection and ranging (LiDAR) device 202,…
¶57: The processor 104 then based on this and the processing with the LiDAR data to devise the movement route so that the movement route facilitates the user…The processor 104 can determine the movement route based solely on…the LiDAR data obtained from the LiDAR device 202,…
¶76: on-board AI-based edge detection algorithm 721 is executed to determine the walkable area 725 on each of the real-time images. By performing edge subtraction based on the walkable area 725 with noise removal 722 and appropriate thresholding, a binary mask 726 is generated. The binary mask 726 is a foreground mask of the same size as the original image, with each pixel marked as either a background pixel or a foreground pixel. With the detected object 730, the path planning unit 321B executes algorithms for calculating the goals and route 731 to the target destination 735…the haptic unit 20 is actuated based on a movement route to signify the user 740.];
generating one or more control signals in accordance with the sensor data; and outputting control signals to one or more…actuators, each…actuator configured to actuate one of the one or more compliant mechanism units in accordance with the control signals. [¶51: the sensing unit 101 comprises one or more sensing devices selected from a group consisting of…light detection and ranging (LiDAR) device 202,…
¶57: The processor 104 then based on this and the processing with the LiDAR data to devise the movement route so that the movement route facilitates the user…The processor 104 can determine the movement route based solely on…the LiDAR data obtained from the LiDAR device 202,…
¶76: on-board AI-based edge detection algorithm 721 is executed to determine the walkable area 725 on each of the real-time images. By performing edge subtraction based on the walkable area 725 with noise removal 722 and appropriate thresholding, a binary mask 726 is generated. The binary mask 726 is a foreground mask of the same size as the original image, with each pixel marked as either a background pixel or a foreground pixel. With the detected object 730, the path planning unit 321B executes algorithms for calculating the goals and route 731 to the target destination 735…the haptic unit 20 is actuated based on a movement route to signify the user 740.], but doesn’t explicitly disclose, and
CHALARD discloses, outputting control signals to one or more electromagnetic actuators, each electromagnetic actuator configured to actuate one of the one or more compliant mechanism units in accordance with the control signals. [¶42: means of acquiring the environment, for example, a spectacle frame (10)…acquire data about the environment in real time to provide digital images that control the actions of a haptic transducer, as shown in FIG. 1 . The haptic transducer generates actions in the form of pressure on the skin by electromagnetic or electromechanical actuators,...
¶18: acquiring a real or virtual visual environment…processing the digital representation of the visual environment in order to provide an electrical signal for controlling a haptic interface,…periodically extracting at least one pulsed digital activation pattern for a subset of spikes of the haptic zone. The haptic interface consists of a lumbar belt with an active surface of N×M spikes whose movement is controlled by actuators,…the processing means provides a sequence of P activation frames for the actuators,…
¶75: This first step consists of calculating a depth map of size dmW*dmH from the two images acquired… by lidar,].
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the electromagnetic haptic actuators for providing force feedback, as taught by CHALARD, with the method taught by KONG as discussed above, with a reasonable expectation of success, in order to provide a comfortable user experience with good resolution [¶45: The surface of the active matrix formed by the solenoids covers an extended lumbar region, for good resolution and comfort of use.].
Regarding claim 12:
KONG and CHALARD disclose, The method of claim 11, and
KONG further discloses, capturing sensor data comprises the sensor unit capturing light detection and ranging (LiDAR) data; and extracting a pixel value for each of the one or more compliant mechanism units comprises:
generating a LiDAR pixel map based on the LiDAR data; and
extracting a pixel value from the LiDAR pixel map for each of the one or more compliant mechanism units. [¶51: the sensing unit 101 comprises one or more sensing devices selected from a group consisting of…light detection and ranging (LiDAR) device 202,…
¶57: The processor 104 then based on this and the processing with the LiDAR data to devise the movement route so that the movement route facilitates the user…The processor 104 can determine the movement route based solely on…the LiDAR data obtained from the LiDAR device 202,…
¶76: on-board AI-based edge detection algorithm 721 is executed to determine the walkable area 725 on each of the real-time images. By performing edge subtraction based on the walkable area 725 with noise removal 722 and appropriate thresholding, a binary mask 726 is generated. The binary mask 726 is a foreground mask of the same size as the original image, with each pixel marked as either a background pixel or a foreground pixel. With the detected object 730, the path planning unit 321B executes algorithms for calculating the goals and route 731 to the target destination 735…the haptic unit 20 is actuated based on a movement route to signify the user 740.].
Regarding claim 13:
KONG and CHALARD disclose, The method of claim 12, and
KONG further discloses, wherein the one or more pixel values extracted from the LiDAR pixel map are indicative of a height of an object at a location in the environment or a distance to an object in the environment. [The examiner notes that the claim requires only one of the elements separated by “or,” and only one of them is given patentable weight. Accordingly, KONG discloses a height of an object at a location in the environment, as described below:
¶76: The binary mask 726 is a foreground mask of the same size as the original image, with each pixel marked as either a background pixel or a foreground pixel. With the detected object 730, the path planning unit 321B executes algorithms for calculating the goals and route 731 to the target destination 735…the haptic unit 20 is actuated based on a movement route to signify the user 740…
¶52: sensing unit 101 is also equipped with a LiDAR device 202,…The sensing unit 101 is configured to detect the low-height (for example, 2 meters in height) environment structures, objects, or obstacles,].
Regarding claim 14:
KONG and CHALARD disclose, The method of claim 11, and
KONG further discloses, a compliant mechanism comprises a plurality of compliant mechanism units; and [¶41: haptic unit 20 includes a haptic arrangement having a plurality of haptic actuators 106 adapted to provide a tactile stimulation to the torso or limbs of the user…
¶60: the plurality of haptic actuators 106 include at least one actuator proximate to the upper left portion of the user's body, at least one actuator proximate to the upper right portion of the user's body, at least one actuator proximate to the lower left portion of the user's body, at least one actuator proximate to the lower right portion of the user's body.
The examiner notes the claim objections and 35 U.S.C. 112(b) rejections set forth in this Office action];
extracting a pixel value for each compliant mechanism unit [¶60: the plurality of haptic actuators 106 include at least one actuator proximate to the upper left portion of the user's body, at least one actuator proximate to the upper right portion of the user's body, at least one actuator proximate to the lower left portion of the user's body, at least one actuator proximate to the lower right portion of the user's body…
¶76: on-board AI-based edge detection algorithm 721 is executed to determine the walkable area 725 on each of the real-time images. By performing edge subtraction based on the walkable area 725 with noise removal 722 and appropriate thresholding, a binary mask 726 is generated. The binary mask 726 is a foreground mask of the same size as the original image, with each pixel marked as either a background pixel or a foreground pixel. With the detected object 730, the path planning unit 321B executes algorithms for calculating the goals and route 731 to the target destination 735…the haptic unit 20 is actuated based on a movement route to signify the user 740.], but doesn’t explicitly disclose, and
CHALARD further discloses extracting a pixel value for each compliant mechanism unit comprises reducing the sensor data into an array of pixel values, each pixel value corresponding to one of the compliant mechanism units. [¶26: the step of calculating a digital image of N and M haptic pixels comprises processing consisting in eliminating voxels outside a user's traffic lane prior to calculating the digital image of N and M haptic pixels, established from the remaining voxels only....
¶44: This belt (20) is equipped with a set of solenoids arranged on supports (21 to 24) to form a matrix, for example, of 20×40 pixels. These solenoids are arranged to form a regular matrix,…An electronic circuit receives the visual signals and processes them to control the solenoids,…
¶18: The haptic interface consists of a lumbar belt with an active surface of N×M spikes whose movement is controlled by actuators, preferably solenoids,…
¶20: method for processing the digital representation of a visual environment to control a haptic interface consisting of a lumbar belt with an active surface of N×M actuators,…
¶42: spectacle frame (10) equipped with cameras (11, 12) used to acquire data about the environment in real time to provide digital images that control the actions of a haptic transducer, as shown in FIG. 1 . The haptic transducer generates actions in the form of pressure on the skin by electromagnetic or electromechanical actuators,].
Regarding claim 15:
KONG and CHALARD disclose, The method of claim 14, and
CHALARD further discloses, calculating a control signal for actuating each compliant mechanism unit in accordance with the pixel value corresponding to the compliant mechanism unit. [¶26: the step of calculating a digital image of N and M haptic pixels comprises processing consisting in eliminating voxels outside a user's traffic lane prior to calculating the digital image of N and M haptic pixels, established from the remaining voxels only....
¶44: This belt (20) is equipped with a set of solenoids arranged on supports (21 to 24) to form a matrix, for example, of 20×40 pixels. These solenoids are arranged to form a regular matrix,…An electronic circuit receives the visual signals and processes them to control the solenoids,…
¶18: The haptic interface consists of a lumbar belt with an active surface of N×M spikes whose movement is controlled by actuators, preferably solenoids,…
¶20: method for processing the digital representation of a visual environment to control a haptic interface consisting of a lumbar belt with an active surface of N×M actuators,…
¶42: spectacle frame (10) equipped with cameras (11, 12) used to acquire data about the environment in real time to provide digital images that control the actions of a haptic transducer, as shown in FIG. 1 . The haptic transducer generates actions in the form of pressure on the skin by electromagnetic or electromechanical actuators,].
Regarding claim 16:
KONG and CHALARD disclose, The method of claim 15, and
CHALARD further discloses, wherein the sensor data comprises two-dimensional image data and the pixel value extracted from the two-dimensional image data comprises a grayscale pixel value. [¶26: the step of calculating a digital image of N and M haptic pixels comprises processing consisting in eliminating voxels outside a user's traffic lane prior to calculating the digital image of N and M haptic pixels, established from the remaining voxels only....
¶42: spectacle frame (10) equipped with cameras (11, 12) used to acquire data about the environment in real time to provide digital images that control the actions of a haptic transducer, as shown in FIG. 1 . The haptic transducer generates actions in the form of pressure on the skin by electromagnetic or electromechanical actuators,…
¶47: The cameras (11, 12) acquire binocular images to reconstruct a digital image with depth information. The first step is to build a grayscale image. For each pixel of the visual image (100) (FIG. 2 ), a haptic image (200) (FIG. 3 ) is transcribed in grayscale depending on the distance of each point from the camera.].
Regarding claim 17:
KONG and CHALARD disclose, The method of claim 11, and
KONG further discloses, wherein the sensor data comprises ultrasonic proximity data, a radar data, or a sonar data. [¶52: sensing unit 101 is also equipped with a LiDAR device 202, a RADAR device 203,].
Regarding claim 18:
KONG and CHALARD disclose, The method of claim 11, and
KONG further discloses, a compliant mechanism forms a haptic meta-surface comprising a flexible membrane supported by the one or more compliant mechanism units; [¶41: the haptic unit 20 includes one or more haptic straps arranged to adjustably surround the user to secure the device body on the user…The haptic unit 20 includes a haptic arrangement having a plurality of haptic actuators 106 adapted to provide a tactile stimulation to the torso or limbs of the user…
¶43: the haptic unit 20 may include adjustable straps to retain the device body in position. The adjustable straps may be made of polypropylene, leather, fiber, silicone, natural rubber, nylon webbing, or other stretchable or non-stretchable materials. On the posterior side of the adjustable straps, a plurality of haptic actuators 106 are arranged to deliver a sensory signal to the user….the plurality of haptic actuators 106 are positioned on the underside of the adjustable straps to form a haptic arrangement adapted to provide a tactile stimulation to the torso or limbs of the user,
The examiner notes the 35 U.S.C. 112(b) rejections set forth in this Office action];
the one or more compliant mechanism units are actuated in accordance with the sensor data to control a shape of the haptic meta-surface. [¶41: The haptic unit 20 includes a haptic arrangement having a plurality of haptic actuators 106 adapted to provide a tactile stimulation to the torso or limbs of the user…
¶43: the haptic unit 20 may include adjustable straps to retain the device body in position….On the posterior side of the adjustable straps, a plurality of haptic actuators 106 are arranged to deliver a sensory signal to the user….the plurality of haptic actuators 106 are positioned on the underside of the adjustable straps to form a haptic arrangement adapted to provide a tactile stimulation to the torso or limbs of the user, such as vibration stimulation, push actuation, pull actuation, a temperature stimulation, a compression caused by a change in tension of a haptic strap].
Regarding claim 19:
KONG and CHALARD disclose, The method of claim 11, and
KONG further discloses, wherein each of the one or more compliant mechanism units is configured to actuate a microswitch. [¶43: the plurality of haptic actuators 106 are positioned on the underside of the adjustable straps to form a haptic arrangement adapted to provide a tactile stimulation to the torso or limbs of the user, such as vibration stimulation, push actuation, pull actuation,…
¶60: Each of the plurality of haptic actuators 106 comprises a motorized finger, an eccentric rotating mass, a linear resonant actuator, or any other actuators that exert a force on the skin of the user. The processor 104 may determine an actuation pattern of the plurality of haptic actuators 106 that corresponds to the action that needs to be taken by the user, and the plurality of haptic actuators 106 are actuated with a plurality of actuation patterns each corresponding to a respective action that needs to be taken by the user.
The examiner notes that one of ordinary skill in the art would understand that KONG teaches actuation of the actuators such that a microswitch is used for activation/deactivation of the actuators].
Claims 10 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over KONG and CHALARD, and further in view of Jones et al. (US20050261561A1) [hereinafter Jones].
Regarding claim 10:
KONG and CHALARD disclose, The device of claim 1, but they do not explicitly disclose, and
Jones discloses, wherein each of the one or more compliant mechanism units is configured to open or close a microfluidic channel. [¶6: system and method for determining intravenous blood levels of a target compound contained in a blood vessel of a patient….detecting concentrations of the target compound within a patient's blood using a sensor device configured to optically test blood at a location within the blood vessel…calculating a measured amount of a therapeutic compound to administer into the patient's bloodstream based on the concentrations of the target compound in the blood. The measured amount of therapeutic compound may then be pumped through the catheter into the patient's blood…
¶22: A pump 208 may be located directly in contact with or may be coupled with the IV tubing and can “force” or pump the appropriate amount of fluid down the IV tubing, through the IV catheter, and into the blood vessel…
¶55: Blood glucose concentrations can be determined by the application of coherent lidar techniques to an intravenous blood glucose monitor. The back-reflection system of the present embodiment allows for ease of implementation for an intravenous sensor due to the geometry of the sensor.].
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the capability of each of the one or more compliant mechanism units opening or closing a microfluidic channel, using Jones's technique of controlling a patient medical device based on a parameter sensed via lidar sensing and processing, with the system taught by KONG and CHALARD as discussed above, with a reasonable expectation of success, in order to deliver a precise amount of fluid to a patient by activating a catheter/microfluidic channel based on a precise measurement of a patient health parameter [¶6: detecting concentrations of the target compound within a patient's blood using a sensor device configured to optically test blood at a location within the blood vessel…calculating a measured amount of a therapeutic compound to administer into the patient's bloodstream based on the concentrations of the target compound in the blood….¶60: to minimize interference with the optical glucose signal and thus maximize the reliability and accuracy of the glucose measurement.].
Regarding claim 20:
KONG and CHALARD disclose, The method of claim 11, but they do not explicitly disclose, and
Jones discloses, wherein each of the one or more compliant mechanism units is configured to open or close a microfluidic channel. [¶6: system and method for determining intravenous blood levels of a target compound contained in a blood vessel of a patient….detecting concentrations of the target compound within a patient's blood using a sensor device configured to optically test blood at a location within the blood vessel…calculating a measured amount of a therapeutic compound to administer into the patient's bloodstream based on the concentrations of the target compound in the blood. The measured amount of therapeutic compound may then be pumped through the catheter into the patient's blood…
¶22: A pump 208 may be located directly in contact with or may be coupled with the IV tubing and can “force” or pump the appropriate amount of fluid down the IV tubing, through the IV catheter, and into the blood vessel…
¶55: Blood glucose concentrations can be determined by the application of coherent lidar techniques to an intravenous blood glucose monitor. The back-reflection system of the present embodiment allows for ease of implementation for an intravenous sensor due to the geometry of the sensor.].
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the capability of each of the one or more compliant mechanism units opening or closing a microfluidic channel, using Jones's technique of controlling a patient medical device based on a parameter sensed via lidar sensing and processing, with the method taught by KONG and CHALARD as discussed above, with a reasonable expectation of success, in order to deliver a precise amount of fluid to a patient by activating a catheter/microfluidic channel based on a precise measurement of a patient health parameter [¶6: detecting concentrations of the target compound within a patient's blood using a sensor device configured to optically test blood at a location within the blood vessel…calculating a measured amount of a therapeutic compound to administer into the patient's bloodstream based on the concentrations of the target compound in the blood….¶60: to minimize interference with the optical glucose signal and thus maximize the reliability and accuracy of the glucose measurement.].
Conclusion
The prior art made of record and not relied upon, which is considered pertinent to applicant's disclosure, is listed on the PTO-892 Notice of References Cited.
Azdoud et al. (US20220323736A1) - Robotic tattooing and related technologies:
¶7: a machine vision device positioned to obtain one or more images of the portion of skin, and at least one controller. The controller can be configured to calculate a skin position and/or a skin deformation based on the obtained images and/or sensor signals, and control a puncture depth based at least in part on at least one of: the skin position, the skin deformation, or other characteristic(s) of the portion of skin.
You et al. (US20170344116A1) - Haptic output methods and devices:
¶25: A semantic-aware tactile (haptic) sensing device 200 may comprise a multimodality semantic mixer 230 and haptic rendering engine 240. For example, according to the semantics of 3D map data (e.g. tree, glass wall, buildings), or any other data on object properties, the multimodality semantic mixer 230 converts the property data into a format that is able to be rendered on the haptic rendering device.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MOHAMMED SHAFAYET whose telephone number is (571)272-8239. The examiner can normally be reached M-F 8:30 AM-5:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kenneth Lo can be reached at (571) 272-9774. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/M.S./
Patent Examiner,
Art Unit 2116
/KENNETH M LO/Supervisory Patent Examiner, Art Unit 2116