Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1-3 and 9-10 are rejected under 35 U.S.C. 103 as being unpatentable over LEE et al. (KR 2015-0097043 A – see annotated citations of attached translation) in view of YOON et al. (KR 2113560 B1).
[Claim 1] LEE discloses (abstract) a walking assistance system for the visually impaired, comprising:
smart glasses 100 wearable by a user, including a camera 110, an image processor 120, and a first wireless communication part 130; and
a cane 200 including a vibrator 210 and a second wireless communication part 230,
wherein the camera is configured to acquire image information about a forward path (see below), and
wherein the image processor is configured to
analyze a type of the forward path and a type of an obstacle (determination of object or pedestrian) based on the image information (see below),
generate guidance information related to the type of the forward path and the type of the obstacle (see claims annotated below).
The eyeglass module 100 includes an image sensor 110 for acquiring an image of a forward object or pedestrian (hereinafter referred to as a front obstacle) along the line of sight of the eyeglasses while the visually impaired person moves; an image signal analysis unit 120 for analyzing the acquired image; and a first wireless communication unit 130 for transmitting the image analysis signal produced by the image signal analysis unit 120 to the cane module. The image sensor 110 may be a CCD (Charge Coupled Device) sensor or a CMOS (Complementary Metal Oxide Semiconductor) sensor. The image signal analysis unit 120 analyzes the image obtained from the image sensor 110 to determine whether there is an object or a pedestrian in front of the visually impaired person; a machine learning algorithm is applied to the image so that pedestrians or objects can be detected. In particular, if the obstacle is a pedestrian, a face recognition algorithm (AdaBoost) is performed: features are extracted for each facial part so that the facial characteristics can be clearly represented and the face detected; weak classifiers are then created from the extracted features; the weak classifiers best able to separate face images from non-face images are selected; and the selected weak classifiers are combined, with weights, into a strong classifier.
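For illustration only (not part of LEE's disclosure), the AdaBoost scheme described above, in which weak threshold classifiers are selected and weighted into a strong classifier, can be sketched as follows; the one-dimensional features, labels, and thresholds here are hypothetical:

```python
import math

def train_adaboost(samples, labels, n_rounds=5):
    """Train a strong classifier as a weighted vote of threshold 'stumps'.

    samples: 1-D feature values (e.g. a Haar-like feature response)
    labels:  +1 for face, -1 for non-face
    """
    n = len(samples)
    weights = [1.0 / n] * n
    stumps = []  # each entry: (threshold, polarity, alpha)
    thresholds = sorted(set(samples))
    for _ in range(n_rounds):
        best = None
        for t in thresholds:
            for polarity in (1, -1):
                # weak classifier: sign(polarity * (x - t))
                err = sum(w for x, y, w in zip(samples, labels, weights)
                          if (1 if polarity * (x - t) >= 0 else -1) != y)
                if best is None or err < best[0]:
                    best = (err, t, polarity)
        err, t, polarity = best
        err = min(max(err, 1e-10), 1 - 1e-10)
        alpha = 0.5 * math.log((1 - err) / err)  # vote weight of this weak classifier
        stumps.append((t, polarity, alpha))
        # re-weight the samples so misclassified ones get more attention
        new_w = []
        for x, y, w in zip(samples, labels, weights):
            pred = 1 if polarity * (x - t) >= 0 else -1
            new_w.append(w * math.exp(-alpha * y * pred))
        total = sum(new_w)
        weights = [w / total for w in new_w]
    return stumps

def strong_classify(stumps, x):
    """Weighted vote of the selected weak classifiers: +1 face, -1 non-face."""
    score = sum(alpha * (1 if polarity * (x - t) >= 0 else -1)
                for t, polarity, alpha in stumps)
    return 1 if score >= 0 else -1
```

This is only a minimal sketch of the boosting idea LEE names; LEE's actual implementation (Haar-like feature extraction over image windows, cascade structure, etc.) is not on the record in this detail.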
The cane module 200 of the blind person includes a second wireless communication unit 230 for receiving the image analysis signal transmitted from the first wireless communication unit 130; a cane control unit 220 that generates, according to the analysis signal, a control signal for a braille (or tactile) signal that the blind person can recognize; and a braille vibration unit 210 for outputting the braille (or tactile) signal according to the control signal from the cane control unit 220. The braille vibration unit 210 includes a vibration motor 212 for implementing the braille signal and a motor driver 211 for driving the vibration motor; the vibration motor is driven by a PWM motor control method, with the control cycle governed by a timer interrupt. The braille vibration unit 210 is attached to the cane and implements the vibration signal through the vibration motor, or implements a vibration signal according to a braille vibration pattern stored in advance in correspondence with the image analysis signal, so that the signal can be detected by the sense of touch on the cane.
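For illustration only (the pattern names and PWM duty-cycle values below are hypothetical, not from LEE), the mapping from an image analysis signal to a pre-stored braille vibration pattern driven by PWM steps might be sketched as:

```python
# Hypothetical lookup of pre-stored braille vibration patterns: each entry
# is a sequence of (duty_cycle_percent, duration_ms) PWM steps that the
# motor driver would play back on the vibration motor.
VIBRATION_PATTERNS = {
    "object_ahead":     [(80, 200), (0, 100), (80, 200)],
    "pedestrian_ahead": [(50, 100), (0, 50), (50, 100), (0, 50), (50, 100)],
    "clear":            [],
}

def pattern_for_signal(image_analysis_signal):
    """Return the PWM steps the cane control unit would drive for a signal."""
    return VIBRATION_PATTERNS.get(image_analysis_signal, [])

def total_duration_ms(steps):
    """Total playback time of a pattern, e.g. for scheduling the timer interrupt."""
    return sum(duration for _, duration in steps)
```

The actual braille patterns, motor timing, and interrupt scheme in LEE are described only at a high level, so this is a sketch of the concept, not the reference's implementation.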
CLAIMS 1. A method of guiding a visually impaired person in a system in which a spectacle module and a cane module are connected via a wireless network, the method comprising the steps of:
(a) acquiring image information about a front obstacle on the road on which the visually impaired person moves, through an image sensor mounted on the spectacle module;
(b) determining whether a front obstacle is present by applying pattern recognition and a machine learning algorithm to the image information obtained in step (a);
(c) transmitting the image analysis signal determined in step (b) to the cane module through the wireless network; and
(d) the cane module generating a control signal based on the image analysis signal transmitted in step (c), and providing tactile information to the visually impaired person by implementing a vibration signal in the cane module according to the control signal, or by implementing a vibration signal according to a braille vibration pattern stored in advance in correspondence with the image analysis signal.
The method according to claim 6,
Wherein the system further comprises a smart device, connected to the eyeglass module and the cane module via the wireless network, the smart device including a speaker,
(e) generating voice guidance, such as a direction indication, through a voice guidance application of the smart device and providing it through the speaker;
(f) sensing the front obstacle through the spectacle module and the cane module, or outputting the tactile signal to the visually impaired person in the moving direction…
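Steps (a) through (d) of LEE's quoted claim describe a sense-analyze-transmit-actuate loop. Purely for illustration (the function names and stage decomposition are this sketch's assumptions, not LEE's), one iteration of that loop could be written as:

```python
def assist_step(acquire_image, analyze, transmit, vibrate):
    """One iteration of the claimed method, with each stage injected as a
    callable so the glasses-side and cane-side roles stay separate."""
    image = acquire_image()        # (a) image sensor on the eyeglass module
    analysis = analyze(image)      # (b) pattern recognition / machine learning
    received = transmit(analysis)  # (c) wireless link to the cane module
    return vibrate(received)       # (d) braille vibration on the cane
```

A usage example with stub stages:

```python
result = assist_step(lambda: "frame",
                     lambda img: "obstacle",
                     lambda sig: sig,
                     lambda sig: "vibrate:" + sig)
# result is the cane-side actuation decision for the analyzed frame
```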
However, LEE fails to explicitly disclose:
an audio output part, and
output the guidance information through the audio output part.
YOON teaches (abstract), in a similar field of invention, an alarm device for the blind comprising a goggle unit 30 and a cane unit 20 with an audio output part (speaker 35) through which guidance information is output (see citation below), for the purpose of informing a person about obstacles sensed by sensor 31.
In this case, the second output unit 35 may be a simple vibrating element or a bone conduction speaker. In addition, it may be a speaker capable of voice notification, which may not only provide an alarm but also provide information on the location, type, and distance of the obstacle at the same time.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to try using an audio output part for providing guidance information, in order to provide vocal information about a sensed obstacle and thereby assist a blind person in avoiding it.
[Claim 2] LEE discloses the walking assistance system for the visually impaired according to Claim 1, wherein the image processor generates a virtual Braille block (i.e., based on a control signal derived from the image) based on a result of analysis, wherein the first wireless communication part transmits recognition information to the cane based on recognition of the cane detected through the camera on the generated virtual Braille block, and wherein the vibrator of the cane outputs vibration based on the recognition information.
Description
The cane module 200 is described as quoted for Claim 1 above: it includes the second wireless communication unit 230 for receiving the image analysis signal, the cane control unit 220 that generates a control signal from that analysis signal, and the braille vibration unit 210 (vibration motor 212 and motor driver 211) that outputs the corresponding braille vibration, which the user detects by the sense of touch on the cane.
Thereafter, the cane control unit generates a control signal based on the transmitted image analysis signal and implements a vibration signal in the cane module according to the control signal, or implements a vibration signal according to the braille vibration pattern stored in advance in correspondence with the image analysis signal, so that tactile information can be provided to and sensed by the visually impaired person (S240).
[Claim 3] LEE discloses (as for claim 1) the walking assistance system for the visually impaired according to Claim 2, wherein the image processor generates the virtual Braille blocks as a path that avoids an obstacle among walkable paths in the forward path (purpose of system of LEE is to detect obstacle and generate a control signal to control feedback providing to visually impaired which would implicitly include a walkable path to avoid obstacle).
[Claim 9] LEE discloses the walking assistance system for the visually impaired according to Claim 1, further comprising a smart device 400 including a third wireless communication part and an environment setting part (all mobile devices such as smart phones include at least one option to set the clock, wireless functions, user preferences, volume, etc.), wherein the user sets functions of the smart glasses and the cane through the environment setting part, and wherein the smart device transmits the set functions to the smart glasses and the cane through the third wireless communication part (see below – all of these functions are required for the smart device 400 to operate properly with the glasses and cane).
The blind person carries the eyeglass module 100, the cane module 200, and his or her own smart device 400, so that wireless communication can be implemented between the eyeglass module and the cane module, or among the eyeglass module, the cane module, and the smart device. Here, the smart device 400 may be any one of a smart phone, a PDA, or a tablet PC, and may be a portable mobile system that can easily communicate over the Internet. In addition, the wireless communication method is based on Wi-Fi communication, and data can also be transmitted through short-range wireless communication such as Bluetooth communication, wireless LAN, or ZigBee.
A navigation application for the visually impaired (hereinafter referred to as an app) is installed in the smart device 400. The navigation application receives data through Wi-Fi communication, displays the data, and outputs voice guidance to the installed speaker (not shown). Accordingly, in addition to the voice guidance through the speaker, the glasses module and the cane module can detect a front obstacle while the user moves according to the navigation application, and can inform the visually impaired person of the moving direction by touch through vibration. For example, when the vibration motors are arranged in a line, the vibration intensity or vibration frequency is increased while activating the motors from the upper side to the lower side (top-down method), or from the lower side to the upper side (bottom-up method), so that the moving direction can be known.
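The top-down/bottom-up activation of a line of vibration motors described above can be sketched as follows; for illustration only, with the motor indexing and the linear intensity ramp being this sketch's assumptions rather than LEE's stated parameters:

```python
def direction_sequence(n_motors, direction):
    """Order in which a vertical line of motors fires to signal direction.

    'down' ramps from the top motor to the bottom (top-down method);
    'up' ramps from the bottom motor to the top (bottom-up method).
    Intensity grows along the ramp, as described in LEE.
    Returns a list of (motor_index, intensity_step) pairs.
    """
    if direction == "down":
        order = list(range(n_motors))
    else:
        order = list(range(n_motors - 1, -1, -1))
    # pair each motor with an increasing intensity step (1..n_motors)
    return [(motor, step + 1) for step, motor in enumerate(order)]
```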
[Claim 10] As applied for claim 1, LEE as modified by YOON discloses a walking assistance method for the visually impaired in a walking assistance system for the visually impaired, the walking assistance system including smart glasses wearable by a user, which include a camera, an image processor, an audio output part, and a first wireless communication part, and a cane, which includes a vibrator and a second wireless communication part, the walking assistance method comprising: acquiring image information about a forward path through the camera; analyzing a type of the forward path and a type of an obstacle based on the image information; generating guidance information related to the type of the forward path and the type of the obstacle; and outputting the guidance information through the audio output part.
Claims 4-6 and 8 are rejected under 35 U.S.C. 103 as being unpatentable over LEE et al. (KR 2015-0097043 A – see annotated citations of attached translation) in view of YOON et al. (KR 2113560 B1), further in view of LATTA et al. (KR 20150140807 A).
However, LEE as modified by YOON fails to explicitly disclose:
[Claim 4] the walking assistance system for the visually impaired according to Claim 3, wherein the smart glasses further include an inertial measurement unit (IMU) sensor, wherein the IMU sensor senses three-dimensional position information and rotation information of the camera, and wherein the image processor generates the virtual Braille block based on the three- dimensional position information and rotation information of the camera sensed by the IMU sensor.
LATTA teaches (abstract) in a similar field of invention,
One embodiment of the mobile device 19 includes a network interface 145, a processor 146, a memory 147, a camera 148, a sensor 149, and a display 150. The network interface 145 allows the mobile device 19 to connect to one or more networks 180, and may include a wireless network interface, a modem, and/or a wired network interface. The processor 146 causes the mobile device 19 to execute computer-readable instructions stored in the memory 147 to perform the processes described herein. The camera 148 may capture a color image and/or a depth image. The sensor 149 may generate motion and/or orientation information associated with the mobile device 19; in some cases, the sensor 149 may include an inertial measurement unit (IMU). Display 150 may display digital images and/or video, and may include a see-through display.
Figure 2a shows an embodiment of a mobile device 19 in communication with a second mobile device 5. The mobile device 19 may include a see-through HMD. As shown, the mobile device 19 communicates with the mobile device 5 via a wired connection 6; however, the mobile device 19 may also communicate with the mobile device 5 via a wireless connection. As an example, an HMD worn by an end user can communicate wirelessly with a second mobile device in the vicinity of the end user (e.g., a mobile phone used by the end user, which may be in a coat pocket). The mobile device 5 may be used to offload computationally intensive processing tasks (e.g., rendering of virtual objects and/or recognition of gestures) and to store information (e.g., a model of a virtual object) that can be used to provide an augmented reality environment on the mobile device 19. The mobile device 19 may provide the mobile device 5 with motion and/or orientation information associated with the mobile device 19. In one example, the motion information includes a velocity or acceleration associated with the mobile device 19, and the orientation information includes Euler angles that provide rotation information about a particular coordinate system or frame of reference. In some cases, the mobile device 19 may include a motion and orientation sensor, such as an inertial measurement unit (IMU), to obtain the motion and/or orientation information associated with the mobile device 19.
The right eyeglass leg 202 also includes a biometric sensor 220, an eye tracking system 221, an earphone 230, a motion and orientation sensor 238, a GPS receiver 232, a power supply 239, and a communication interface 237, all of which communicate with the processing unit 236. The biometric sensor 220 may include one or more electrodes that determine the pulse or heart rate associated with the end user of the HMD 200 and a temperature sensor that determines the body temperature associated with the end user of the HMD 200; in one embodiment, the biometric sensor 220 includes a pulse rate measurement sensor that is pressed against the eyeglass leg of the end user. The motion and orientation sensor 238 may include a three-axis magnetometer, a three-axis gyro, and/or a three-axis accelerometer; in one embodiment, it may include an inertial measurement unit (IMU). The GPS receiver 232 may determine the GPS location associated with the HMD 200. The processing unit 236 may include one or more processors and a memory that stores computer-readable instructions that execute on the one or more processors; the memory may also store other types of data used by the one or more processors.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use inertial sensor to more accurately determine motion and orientation of smart glasses in order to provide proper direction guidance relative to obstacle.
[Claim 5] LATTA discloses the walking assistance system for the visually impaired according to Claim 4, wherein the smart glasses output an alarm through the audio output part when the gaze of the user deviates from a first range by a first angle or more, based on the three-dimensional position information of the camera.
The virtual data engine 197 processes the virtual objects and registers the location and orientation of the virtual objects with respect to various maps of the real world environment stored in the memory device 192. The virtual data engine may also render images associated with the virtual object for display to the end user of the computing system 10. In some embodiments, the computing system 10 may use the image obtained from the capture device 20 to determine a six degree of freedom (6DOF) pose corresponding to an image of a 3D map of the environment. As an example, a 6DOF pose may include information related to the location and orientation of a mobile device (e.g., an HMD) in the environment. The 6DOF pose is used to localize the mobile device and to create an image of the virtual object such that the virtual object appears to be in the proper location within the augmented reality environment. Further information relating to determining a 6DOF pose can be found in U.S. Patent Application No. 13/152,220, entitled "Distributed Asynchronous Localization and Mapping for Augmented Reality." More information relating to performing pose estimation and/or localization of the mobile device can be found in U.S. Patent Application No. 13/017,474, entitled "Mobile Camera Localization Using Depth Maps."
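The Claim 5 limitation, an alarm when the user's gaze deviates from a first range by a first angle or more, can be illustrated with a simple orientation check. For illustration only: the yaw-angle convention and the 30-degree default threshold below are this sketch's assumptions, not taken from LATTA or the claims:

```python
def gaze_alarm(yaw_deg, path_bearing_deg, first_angle_deg=30.0):
    """True if the camera yaw (e.g. from IMU rotation information) deviates
    from the forward-path bearing by the first angle or more, which would
    trigger the alarm through the audio output part."""
    # wrap the signed angular difference into [-180, 180)
    diff = (yaw_deg - path_bearing_deg + 180.0) % 360.0 - 180.0
    return abs(diff) >= first_angle_deg
```

Note that the wrap-around step matters: a yaw of 350 degrees against a bearing of 0 is a deviation of only 10 degrees, not 350.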
However, LEE as modified by YOON fails to explicitly disclose:
[Claim 6] the walking assistance system for the visually impaired according to Claim 2, wherein the image processor generates the virtual Braille block by coloring the type of the forward path based on a predetermined criterion.
LATTA teaches (abstract) in a similar field of invention, using a virtual coloring function to virtually indicate color of a virtual object to assist a person to visualize objects.
Figures 4A to 4D show various embodiments of interactions with a virtual object. A virtual object can be generated and displayed in the augmented reality environment based on a three-dimensional model of the virtual object. The three-dimensional model can specify the dimensions and shape of the virtual object, and may also include various characteristics associated with the virtual object, such as virtual weight, virtual material (e.g., wood, metal, or plastic), virtual color and corresponding transparency, and virtual smell.
In some embodiments, the virtual grab gesture can be enhanced by displaying the virtual object in a manner dependent on the orientation of the end user's hand. For example, if the end user's hand is oriented palm up, the portion of the virtual object that would naturally be occluded by the end user's hand can be removed (e.g., rendered pixels corresponding to the virtual object portion occluded by the end user's hand or forearm can be removed), thereby making the virtual object appear to rest in the end user's hand. This "hand occlusion" technique can simulate the real-world appearance of an object that is partially or totally obscured by the end user's hand. In some cases, upon detection of the grab gesture, the virtual object may be highlighted, pulsed with intense color, or have a virtual wireframe superimposed on it. The grab gesture can also be enhanced by providing pulse vibrations to an electronic bracelet, such as the electronic bracelet 404 worn by the end user.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to try using a coloring function in order to provide further information about an obstacle.
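The Claim 6 limitation of coloring the forward-path type by a predetermined criterion can be reduced to a simple lookup. For illustration only: the path types, the colors, and the criterion itself are hypothetical, as neither the claims nor the references specify them on this record:

```python
# Hypothetical "predetermined criterion": each forward-path type maps to a
# fixed color for the generated virtual Braille block (values illustrative).
PATH_COLORS = {
    "sidewalk":  "green",
    "crosswalk": "yellow",
    "roadway":   "red",
}

def color_virtual_block(path_type, default="gray"):
    """Color the virtual Braille block according to the analyzed path type."""
    return PATH_COLORS.get(path_type, default)
```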
However, LEE as modified by YOON fails to explicitly disclose:
[Claim 8] the walking assistance system for the visually impaired according to Claim 2, wherein the smart glasses receive global positioning system (GPS) information from an external server through the first wireless communication part and generate the virtual Braille block based on the GPS information.
LATTA teaches (abstract) in a similar field of invention, using a GPS receiver 232 to receive GPS information to help provide feedback, such as positioning.
The right eyeglass leg 202 also includes, among the components quoted above for Claim 4 (biometric sensor 220, eye tracking system 221, earphone 230, motion and orientation sensor 238, power supply 239), a GPS receiver 232; the GPS receiver may determine the GPS location associated with the HMD 200.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to try adding a GPS receiver, coupled with a remote server, to help generate the virtual Braille block using GPS information, in order to provide accurate positioning information for the visually impaired person.
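For illustration only, Claim 8's use of received GPS information could rely on the standard haversine great-circle distance to relate the glasses' position to the last point at which a virtual Braille block was generated; the regeneration logic and 5-meter threshold below are this sketch's assumptions, not from the references:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two GPS fixes (degrees)."""
    r = 6371000.0  # mean Earth radius, m
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def needs_new_block(current_fix, last_block_fix, threshold_m=5.0):
    """Regenerate the virtual Braille block once the user has moved far
    enough from where the last block was generated; fixes are (lat, lon)."""
    return haversine_m(*current_fix, *last_block_fix) >= threshold_m
```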
Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over LEE et al. (KR 2015-0097043 A – see annotated citations of attached translation) in view of YOON et al. (KR 2113560 B1), further in view of GERDES et al. (US 11726484 B1).
However, LEE as modified by YOON fails to explicitly disclose:
[Claim 7] the walking assistance system for the visually impaired according to Claim 2, wherein the image processor analyzes the type of the obstacle based on a bounding box related to the obstacle.
GERDES teaches (c.23, l.26-36) the concept of using a bounding box 410 in an edge detection process.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use a bounding box as part of the edge detection process during image analysis in order to more precisely determine the type of obstacle in the image.
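For illustration only, a bounding box over detected edge points and a toy type decision from its shape might look like the sketch below; the edge points and the aspect-ratio rule are this sketch's assumptions, not GERDES' method:

```python
def bounding_box(edge_points):
    """Axis-aligned bounding box (x0, y0, x1, y1) around edge pixels (x, y)."""
    xs = [x for x, _ in edge_points]
    ys = [y for _, y in edge_points]
    return min(xs), min(ys), max(xs), max(ys)

def classify_obstacle(edge_points):
    """Toy type decision from the box's aspect ratio: tall boxes read as
    'pedestrian', wide ones as 'object'."""
    x0, y0, x1, y1 = bounding_box(edge_points)
    width, height = x1 - x0, y1 - y0
    return "pedestrian" if height > width else "object"
```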
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. BRATHWAITE et al. (US 20200043368 A1) could have been applied similarly to the claims, given that it teaches a similar walking assistance system with a camera to detect hazards and a cane, although it appears to lack an audio output part as part of the smart glasses.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CARLOS E GARCIA whose telephone number is (571)270-1354. The examiner can normally be reached M-Th 9-6pm F 9-5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Brian Zimmerman can be reached at (571) 272-3059. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
CARLOS E. GARCIA
Primary Examiner
Art Unit 2686
/Carlos Garcia/Primary Examiner, Art Unit 2686 2/18/2026