DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
The objections to the title and the specification have been withdrawn in view of the amendments. The objection to claim 15 is withdrawn in view of the amendments. The claim interpretation under 35 U.S.C. 112(f) has been withdrawn in view of the amendments.
Applicant argues on page 17 of the Applicant’s remarks that “Without conceding any of the characterizations of the applied references in the Official Action, Applicant submits that all of these references, individually or in combination, fail to disclose or suggest at least the above-noted features of independent claims 1 and 16.” Applicant further argues on page 17 of the Applicant’s remarks that “As none of the cited art, individually or in combination, disclose or suggest at least the above-noted features of independent claims 1 and 16, Applicant submits the inventions defined by claims 1 and 16, and all claims depending therefrom, are not rendered obvious by the asserted references for at least the reasons stated above.” The Examiner respectfully disagrees. Shadeed teaches that the possibility of separately controlling the brightness of each LED-chip of the matrix allows the generation of driver-specific light distributions (See at least Page 506 Col 2 Para 2). Since the brightness of each LED-chip of the matrix is controlled separately, the matrix can be construed as comprising different regions. Therefore, Applicant's arguments filed on 12/01/2025 with respect to claims 1, 4, 8, 11, and 13-28 have been fully considered but they are not persuasive.
Claim Objections
Claims 1, 4, 11, 14-16, 19, 22, 24, 25, and 28 are objected to because of the following informalities: “lighting-emitting device” should read “light-emitting device”. Appropriate correction is required.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claim(s) 1, 4, 8, 14, 15, 16, and 26 are rejected under 35 U.S.C. 103 as being unpatentable over Shadeed et al. (H. Shadeed, J. Wallaschek and S. Mojrzisch, "On Intelligent Adaptive Vehicle Front-Lighting Assistance Systems," 2007 IEEE Intelligent Transportation Systems Conference, Bellevue, WA, USA, 2007, pp. 503-507) (hereinafter Shadeed) in view of Brodsky (US 20050058323 A1).
Regarding Claim 1, Shadeed teaches a robot comprising:
a lighting-emitting device comprising a plurality of separately-controllable light emitting regions, each of the plurality of separately-controllable light emitting regions comprising a plurality of separately-controllable region-specific light-emitting elements (See at least Page 503 Col 1 “A. Safety and Lighting Technology - … New developments like the Adaptive Front-Lighting System (AFS) can improve the illumination of the road in front of the vehicle by offering the driver an optimal light pattern in nearly every situation.”, Page 506 Col 2 Para 2 “2) LED-Array Headlamp - … the LED-array headlamp does not need moving elements to generate different light distributions. Instead, the light sources are addressed directly. The light distributions are generated by creating an image of a matrix of LED-chips. The possibility of individually controlling each LED-chip of the matrix allows the generation of different shapes of light. Activating or deactivating single LED-chips of the matrix can easily realize assisting light functions; for example a glare-free high-beam function could be generated by switching off one or several of the LED-chips that illuminate the area of the oncoming traffic.”, discloses LED-Array Headlamp which is construed as the lighting-emitting device that has a plurality of light-emitting elements)
a sensor that senses information in each of a plurality of spaces outside of the robot, each of the plurality of spaces outside of the robot corresponding to one of the plurality of separately-controllable light emitting regions (See at least Fig 5. Discloses Headlamp detection filter which indicates the light source unit which may include the LED-Array, Page 505 Col 1 Para 1 “The system capture a colour frame from the image sensor and save it in three different formats, colour, grey, and binary image. Each frame-type is associated to a fast one pass customised filter, which is designed mainly to detect one type of objects at a time. After successful detection, the data are feed to three classifier-agents to determine the class of the object.”); and
a controller configured to:
control the lighting-emitting device to emit light into one or more of the plurality of spaces outside of the robot (See at least Fig 5. Discloses Headlamp detection filter which indicates the light source unit which may include the LED-Array, Page 505 Col 1 Para 1 “The system capture a colour frame from the image sensor and save it in three different formats, colour, grey, and binary image. Each frame-type is associated to a fast one pass customised filter, which is designed mainly to detect one type of objects at a time. After successful detection, the data are feed to three classifier-agents to determine the class of the object.”), and
control the sensor to recognize one or more objects in the one or more of the plurality of spaces outside of the robot based on a reflection of the emitted light from the one or more objects (See at least Page 506 Col 2 Para 2 “2) LED-Array Headlamp - … the LED-array headlamp does not need moving elements to generate different light distributions. Instead, the light sources are addressed directly. The light distributions are generated by creating an image of a matrix of LED-chips. The possibility of individually controlling each LED-chip of the matrix allows the generation of different shapes of light. Activating or deactivating single LED-chips of the matrix can easily realize assisting light functions; for example a glare-free high-beam function could be generated by switching off one or several of the LED-chips that illuminate the area of the oncoming traffic. Using a PWM (Pulse Width Modulation) principal to drive the LED-chips makes it possible to produce different levels of brightness, which enable us to adjust the light intensity according to the road illumination. Activating single chips that contribute to the light distribution above the cut-off line could be used to realize a marking light function. Figure 10 depicts a LED-array prototype as well as a low-beam and high-beam light distribution generated with this array. The possibility of separately controlling the brightness of each LED-chip of the matrix allows the generation of driver-specific light distributions.”, Page 505 Col 1 Para 1 “The system capture a colour frame from the image sensor and save it in three different formats, colour, grey, and binary image. Each frame-type is associated to a fast one pass customised filter, which is designed mainly to detect one type of objects at a time. After successful detection, the data are feed to three classifier-agents to determine the class of the object.”),
wherein the controller controls the lighting-emitting device to control over time at least one of the plurality of separately-controllable region-specific light-emitting elements of at least one of the plurality of separately-controllable light emitting regions to emit one or more light patterns that enable the sensor to recognize the one or more objects (See at least Page 506 Col 2 Para 2 “2) LED-Array Headlamp - … the LED-array headlamp does not need moving elements to generate different light distributions. Instead, the light sources are addressed directly. The light distributions are generated by creating an image of a matrix of LED-chips. The possibility of individually controlling each LED-chip of the matrix allows the generation of different shapes of light. Activating or deactivating single LED-chips of the matrix can easily realize assisting light functions; for example a glare-free high-beam function could be generated by switching off one or several of the LED-chips that illuminate the area of the oncoming traffic. Using a PWM (Pulse Width Modulation) principal to drive the LED-chips makes it possible to produce different levels of brightness, which enable us to adjust the light intensity according to the road illumination. Activating single chips that contribute to the light distribution above the cut-off line could be used to realize a marking light function. Figure 10 depicts a LED-array prototype as well as a low-beam and high-beam light distribution generated with this array. The possibility of separately controlling the brightness of each LED-chip of the matrix allows the generation of driver-specific light distributions.”, Page 505 Col 1 Para 1 “The system capture a colour frame from the image sensor and save it in three different formats, colour, grey, and binary image. Each frame-type is associated to a fast one pass customised filter, which is designed mainly to detect one type of objects at a time. 
After successful detection, the data are feed to three classifier-agents to determine the class of the object. Each agent has a weight. Each weight is multiplied with the estimated classification probability evaluated from the agent, and then the result is summed and normalized to estimate the percentage of trustiness of classification…”)
However, Shadeed does not explicitly spell out … based on an object class specific probability of
recognition to be higher than or equal to a preset value.
Brodsky teaches … based on an object class specific probability of recognition to be higher than
or equal to a preset value (See at least Para [0030] “Track 310 in FIG. 3A illustrates a typical track 310 of a reflection pattern; the end 311 of the track 310 occurring when the reflection is insufficient to exceed a given threshold value. Track 320 in FIG. 3B, on the other hand, illustrates the track of an illumination pattern that exhibits a relatively continuous pattern, having an intensity above the given threshold value for most of the field of view of the camera…”, Para [0032] “At 510, an image is received, and at 520, the light patterns within the image are identified. As noted above, thresholding techniques may be used to identify only those light patterns that exceed a given threshold. Pattern matching techniques can also be applied to distinguish headlight patterns, such as recognizing characteristic sizes and shapes of headlight patterns, to distinguish headlights from other vehicle lights as well as from reflections, to further improve the reliability of vehicle identification based on headlight patterns…”, Para [0025] “Thresholding is a technique that is commonly used to reduce the effects caused by the transient illumination of objects, as illustrated in FIGS. 2A-2B. In FIG. 2B, pixels in the image of FIG. 2A that have a luminance level above a given threshold are given a white value, and pixels having a luminance level below the threshold are given a black value…”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the system of Shadeed with the teachings of Brodsky and include the feature of an object class specific probability of recognition to be higher than or equal to a preset value, thereby providing optimal adaptive light irradiation for object recognition, which will increase the performance accuracy, efficiency, and reliability of the robot (See at least Para [0009] “It is a further object of this invention to facilitate the augmentation of existing video-based traffic monitoring systems to support day and night discrete vehicle identification and tracking.”, Para [0032] “… Pattern matching techniques can also be applied to distinguish headlight patterns, such as recognizing characteristic sizes and shapes of headlight patterns, to distinguish headlights from other vehicle lights as well as from reflections, to further improve the reliability of vehicle identification based on headlight patterns...”, Para [0033] “As is evident from FIG. 2B, however, the high-intensity reflections are often indistinguishable from headlights, and further processing is provided to improve the reliability of the vehicle identification process.”).
Regarding Claim 4, modified Shadeed teaches all the elements of claim 1. Shadeed further teaches the robot of claim 1, further comprising a memory that stores information related to the plurality of spaces and information related to corresponding plurality of space-specific light patterns (See at least Page 505 Col 1 Para 1 “Our system does the same via detecting the light distributions of the oncoming/leading vehicles and classifying it in real-time using neural network and fuzzy logic supported with a prototypical database. For the reason that we are dealing with an opening environment like vehicle's space; neither one technique nor one set of hypotheses is applicable to be used to detect different types of objects. Therefore we developed our system based on separating objects in three categories Taillamps, Headlamps and Lane markings; which are probably appear mostly in the traffic situations and may be considered as the relevant targets to our system. The algorithm is designed for parallel processing in order to be suitable to be implemented later on FPGA. Figure 5 shows the main structure of the algorithms. The system capture a colour frame from the image sensor and save it in three different formats, colour, grey, and binary image. Each frame-type is associated to a fast one pass customised filter, which is designed mainly to detect one type of objects at a time. After successful detection, the data are feed to three classifier-agents to determine the class of the object. Each agent has a weight. Each weight is multiplied with the estimated classification probability evaluated from the agent, and then the result is summed and normalized to estimate the percentage of trustiness of classification.”, Page 504 Col 1 Para 3 “A. Vehicle's Sensors - …the external sensors or remote-sensors are used to detect the presence of the objects close to the vehicle and to give information about the vehicle's traffic space…”),
wherein, prior to controlling over time the at least one of the plurality of separately-controllable region-specific light-emitting elements of at least one of the plurality of separately-controllable light emitting regions to emit the one or more light patterns, the controller determines whether the sensed information related to the space exists in the memory (See at least Page 505 Col 1 Para 1 “Our system does the same via detecting the light distributions of the oncoming/leading vehicles and classifying it in real-time using neural network and fuzzy logic supported with a prototypical database. For the reason that we are dealing with an opening environment like vehicle's space; neither one technique nor one set of hypotheses is applicable to be used to detect different types of objects …”), and
based on the sensed information related to the space existing in the memory, controls the lighting-emitting device to emit light based on the information related to the plurality of space- specific light patterns so as to enable the sensor to recognize the one or more objects (See at least Page 506 Col 2 Para 2 “2) LED-Array Headlamp - … the LED-array headlamp does not need moving elements to generate different light distributions. Instead, the light sources are addressed directly. The light distributions are generated by creating an image of a matrix of LED-chips. The possibility of individually controlling each LED-chip of the matrix allows the generation of different shapes of light. Activating or deactivating single LED-chips of the matrix can easily realize assisting light functions; for example a glare-free high-beam function could be generated by switching off one or several of the LED-chips that illuminate the area of the oncoming traffic. Using a PWM (Pulse Width Modulation) principal to drive the LED-chips makes it possible to produce different levels of brightness, which enable us to adjust the light intensity according to the road illumination. Activating single chips that contribute to the light distribution above the cut-off line could be used to realize a marking light function. Figure 10 depicts a LED-array prototype as well as a low-beam and high-beam light distribution generated with this array. The possibility of separately controlling the brightness of each LED-chip of the matrix allows the generation of driver-specific light distributions.”, Page 505 Col 1 Para 1 “The system capture a colour frame from the image sensor and save it in three different formats, colour, grey, and binary image. Each frame-type is associated to a fast one pass customised filter, which is designed mainly to detect one type of objects at a time. After successful detection, the data are feed to three classifier-agents to determine the class of the object. 
Each agent has a weight. Each weight is multiplied with the estimated classification probability evaluated from the agent, and then the result is summed and normalized to estimate the percentage of trustiness of classification…”) based on the object class specific probability of recognition being higher than or equal to the preset value, and
based on the sensed information related to the space not existing in the memory, controls the lighting-emitting device to emit a plurality of random light patterns so as to enable the sensor to recognize the one or more objects (See at least Page 506 Col 2 Para 2 “2) LED-Array Headlamp - … the LED-array headlamp does not need moving elements to generate different light distributions. Instead, the light sources are addressed directly. The light distributions are generated by creating an image of a matrix of LED-chips. The possibility of individually controlling each LED-chip of the matrix allows the generation of different shapes of light. Activating or deactivating single LED-chips of the matrix can easily realize assisting light functions; for example a glare-free high-beam function could be generated by switching off one or several of the LED-chips that illuminate the area of the oncoming traffic. Using a PWM (Pulse Width Modulation) principal to drive the LED-chips makes it possible to produce different levels of brightness, which enable us to adjust the light intensity according to the road illumination. Activating single chips that contribute to the light distribution above the cut-off line could be used to realize a marking light function. Figure 10 depicts a LED-array prototype as well as a low-beam and high-beam light distribution generated with this array. The possibility of separately controlling the brightness of each LED-chip of the matrix allows the generation of driver-specific light distributions.”, Page 505 Col 1 Para 1 “The system capture a colour frame from the image sensor and save it in three different formats, colour, grey, and binary image. Each frame-type is associated to a fast one pass customised filter, which is designed mainly to detect one type of objects at a time. After successful detection, the data are feed to three classifier-agents to determine the class of the object. Each agent has a weight. 
Each weight is multiplied with the estimated classification probability evaluated from the agent, and then the result is summed and normalized to estimate the percentage of trustiness of classification…”) based on the object class specific probability of recognition being higher than or equal to the preset value.
However, Shadeed does not explicitly spell out … based on the object class specific probability of recognition being higher than or equal to the preset value … based on the object class specific probability of recognition being higher than or equal to the preset value.
Brodsky teaches … based on the object class specific probability of recognition to be higher than
or equal to the preset value (See at least Para [0030] “Track 310 in FIG. 3A illustrates a typical track 310 of a reflection pattern; the end 311 of the track 310 occurring when the reflection is insufficient to exceed a given threshold value. Track 320 in FIG. 3B, on the other hand, illustrates the track of an illumination pattern that exhibits a relatively continuous pattern, having an intensity above the given threshold value for most of the field of view of the camera…”, Para [0032] “At 510, an image is received, and at 520, the light patterns within the image are identified. As noted above, thresholding techniques may be used to identify only those light patterns that exceed a given threshold. Pattern matching techniques can also be applied to distinguish headlight patterns, such as recognizing characteristic sizes and shapes of headlight patterns, to distinguish headlights from other vehicle lights as well as from reflections, to further improve the reliability of vehicle identification based on headlight patterns…”, Para [0025] “Thresholding is a technique that is commonly used to reduce the effects caused by the transient illumination of objects, as illustrated in FIGS. 2A-2B. In FIG. 2B, pixels in the image of FIG. 2A that have a luminance level above a given threshold are given a white value, and pixels having a luminance level below the threshold are given a black value…”)… based on the object class specific probability of recognition to be higher than or equal to the preset value (See at least Para [0030] “Track 310 in FIG. 3A illustrates a typical track 310 of a reflection pattern; the end 311 of the track 310 occurring when the reflection is insufficient to exceed a given threshold value. Track 320 in FIG. 
3B, on the other hand, illustrates the track of an illumination pattern that exhibits a relatively continuous pattern, having an intensity above the given threshold value for most of the field of view of the camera…”, Para [0032] “At 510, an image is received, and at 520, the light patterns within the image are identified. As noted above, thresholding techniques may be used to identify only those light patterns that exceed a given threshold. Pattern matching techniques can also be applied to distinguish headlight patterns, such as recognizing characteristic sizes and shapes of headlight patterns, to distinguish headlights from other vehicle lights as well as from reflections, to further improve the reliability of vehicle identification based on headlight patterns…”, Para [0025] “Thresholding is a technique that is commonly used to reduce the effects caused by the transient illumination of objects, as illustrated in FIGS. 2A-2B. In FIG. 2B, pixels in the image of FIG. 2A that have a luminance level above a given threshold are given a white value, and pixels having a luminance level below the threshold are given a black value…”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the system of Shadeed with the teachings of Brodsky and include the feature of an object class specific probability of recognition to be higher than or equal to a preset value, thereby providing optimal adaptive light irradiation for object recognition, which will increase the performance accuracy, efficiency, and reliability of the robot (See at least Para [0009] “It is a further object of this invention to facilitate the augmentation of existing video-based traffic monitoring systems to support day and night discrete vehicle identification and tracking.”, Para [0032] “… Pattern matching techniques can also be applied to distinguish headlight patterns, such as recognizing characteristic sizes and shapes of headlight patterns, to distinguish headlights from other vehicle lights as well as from reflections, to further improve the reliability of vehicle identification based on headlight patterns...”, Para [0033] “As is evident from FIG. 2B, however, the high-intensity reflections are often indistinguishable from headlights, and further processing is provided to improve the reliability of the vehicle identification process.”).
Regarding Claim 8, modified Shadeed teaches all the elements of claim 4.
However, Shadeed does not explicitly spell out the robot of claim 4, wherein, when a certain light pattern, among the plurality of random light patterns, causes the object class specific probability of recognition to be higher than or equal to the preset value, the controller stores information related to the certain light pattern in the memory.
Brodsky teaches the robot of claim 4, wherein, when a certain light pattern, among the plurality of random light patterns, causes the object class specific probability of recognition to be higher than or equal to the preset value, the controller stores information related to the certain light pattern in the memory (See at least Claim “3. The traffic monitoring system of claim 1, further comprising a memory for storing prior images from the camera, and wherein the pattern recognizer is further configured to track a path of each of the vehicles based on corresponding headlight patterns in the prior images.”, Para [0025] “Thresholding is a technique that is commonly used to reduce the effects caused by the transient illumination of objects, as illustrated in FIGS. 2A-2B. In FIG. 2B, pixels in the image of FIG. 2A that have a luminance level above a given threshold are given a white value, and pixels having a luminance level below the threshold are given a black value…”, Para [0032] “At 510, an image is received, and at 520, the light patterns within the image are identified. As noted above, thresholding techniques may be used to identify only those light patterns that exceed a given threshold. Pattern matching techniques can also be applied to distinguish headlight patterns, such as recognizing characteristic sizes and shapes of headlight patterns, to distinguish headlights from other vehicle lights as well as from reflections, to further improve the reliability of vehicle identification based on headlight patterns. Thereafter, combinations of headlight patterns can be associated with each vehicle using further conventional pattern matching techniques, including, for example, rules that are based on consistency of movement among patterns, to pair patterns corresponding to a vehicle, as well as rules that are based on the distance between such consistently moving patterns, to distinguish among multiple vehicles traveling at the same rate of speed.”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the system of Shadeed with the teachings of Brodsky and include the feature of, when a certain light pattern, among the plurality of random light patterns, causes the object class specific probability of recognition to be higher than or equal to the preset value, the controller storing information related to the certain light pattern in the memory, thereby increasing efficiency and reliability by collecting new information for future use, which will improve the performance of the robot through accurate object recognition (See at least Para [0009] “It is a further object of this invention to facilitate the augmentation of existing video-based traffic monitoring systems to support day and night discrete vehicle identification and tracking.”, Para [0032] “… Pattern matching techniques can also be applied to distinguish headlight patterns, such as recognizing characteristic sizes and shapes of headlight patterns, to distinguish headlights from other vehicle lights as well as from reflections, to further improve the reliability of vehicle identification based on headlight patterns...”, Para [0033] “As is evident from FIG. 2B, however, the high-intensity reflections are often indistinguishable from headlights, and further processing is provided to improve the reliability of the vehicle identification process.”).
Regarding Claim 14, modified Shadeed teaches all the elements of claim 1. Shadeed further teaches the robot of claim 1, wherein, based on the one or more light patterns (See at least Page 503 Col 1 “A. Safety and Lighting Technology - … New developments like the Adaptive Front-Lighting System (AFS) can improve the illumination of the road in front of the vehicle by offering the driver an optimal light pattern in nearly every situation.”, Page 506 Col 2 Para 2 “2) LED-Array Headlamp - … the LED-array headlamp does not need moving elements to generate different light distributions. Instead, the light sources are addressed directly. The light distributions are generated by creating an image of a matrix of LED-chips. The possibility of individually controlling each LED-chip of the matrix allows the generation of different shapes of light. Activating or deactivating single LED-chips of the matrix can easily realize assisting light functions; for example a glare-free high-beam function could be generated by switching off one or several of the LED-chips that illuminate the area of the oncoming traffic. Using a PWM (Pulse Width Modulation) principal to drive the LED-chips makes it possible to produce different levels of brightness, which enable us to adjust the light intensity according to the road illumination. Activating single chips that contribute to the light distribution above the cut-off line could be used to realize a marking light function. Figure 10 depicts a LED-array prototype as well as a low-beam and high-beam light distribution generated with this array. 
The possibility of separately controlling the brightness of each LED-chip of the matrix allows the generation of driver-specific light distributions.”) … the controller controls the lighting-emitting device to control over time at least one of the plurality of separately-controllable region-specific light-emitting elements of at least one of the plurality of separately-controllable light emitting regions to emit one or more additional light patterns that enable the sensor to recognize the one or more objects (See at least Page 506 Col 2 Para 2 “2) LED-Array Headlamp - … the LED-array headlamp does not need moving elements to generate different light distributions. Instead, the light sources are addressed directly. The light distributions are generated by creating an image of a matrix of LED-chips. The possibility of individually controlling each LED-chip of the matrix allows the generation of different shapes of light. Activating or deactivating single LED-chips of the matrix can easily realize assisting light functions; for example a glare-free high-beam function could be generated by switching off one or several of the LED-chips that illuminate the area of the oncoming traffic. Using a PWM (Pulse Width Modulation) principal to drive the LED-chips makes it possible to produce different levels of brightness, which enable us to adjust the light intensity according to the road illumination. Activating single chips that contribute to the light distribution above the cut-off line could be used to realize a marking light function. Figure 10 depicts a LED-array prototype as well as a low-beam and high-beam light distribution generated with this array. 
The possibility of separately controlling the brightness of each LED-chip of the matrix allows the generation of driver-specific light distributions.”, Page 506 Col 2 Para 3 “Using a PWM (Pulse Width Modulation) principal to drive the LED-chips makes it possible to produce different levels of brightness, which enable us to adjust the light intensity according to the road illumination. Activating single chips that contribute to the light distribution above the cut-off line could be used to realize a marking light function. Figure 10 depicts a LED-array prototype as well as a low-beam and high-beam light distribution generated with this array. The possibility of separately controlling the brightness of each LED-chip of the matrix allows the generation of driver-specific light distributions.”)
However, Shadeed does not explicitly spell out … causing the object class specific probability of recognition to be lower than the preset value, … based on the object class specific probability of recognition to be higher than or equal to the preset value.
Brodsky teaches … causing the object class specific probability of recognition to be lower than the preset value (See at least Para [0030] “Track 310 in FIG. 3A illustrates a typical track 310 of a reflection pattern; the end 311 of the track 310 occurring when the reflection is insufficient to exceed a given threshold value. Track 320 in FIG. 3B, on the other hand, illustrates the track of an illumination pattern that exhibits a relatively continuous pattern, having an intensity above the given threshold value for most of the field of view of the camera…”, Para [0032] “At 510, an image is received, and at 520, the light patterns within the image are identified. As noted above, thresholding techniques may be used to identify only those light patterns that exceed a given threshold. Pattern matching techniques can also be applied to distinguish headlight patterns, such as recognizing characteristic sizes and shapes of headlight patterns, to distinguish headlights from other vehicle lights as well as from reflections, to further improve the reliability of vehicle identification based on headlight patterns…”, Para [0025] “Thresholding is a technique that is commonly used to reduce the effects caused by the transient illumination of objects, as illustrated in FIGS. 2A-2B. In FIG. 2B, pixels in the image of FIG. 2A that have a luminance level above a given threshold are given a white value, and pixels having a luminance level below the threshold are given a black value…”), … based on the object class specific probability of recognition to be higher than or equal to the preset value (See at least Para [0030] “Track 310 in FIG. 3A illustrates a typical track 310 of a reflection pattern; the end 311 of the track 310 occurring when the reflection is insufficient to exceed a given threshold value. Track 320 in FIG. 
3B, on the other hand, illustrates the track of an illumination pattern that exhibits a relatively continuous pattern, having an intensity above the given threshold value for most of the field of view of the camera…”, Para [0032] “At 510, an image is received, and at 520, the light patterns within the image are identified. As noted above, thresholding techniques may be used to identify only those light patterns that exceed a given threshold. Pattern matching techniques can also be applied to distinguish headlight patterns, such as recognizing characteristic sizes and shapes of headlight patterns, to distinguish headlights from other vehicle lights as well as from reflections, to further improve the reliability of vehicle identification based on headlight patterns…”, Para [0025] “Thresholding is a technique that is commonly used to reduce the effects caused by the transient illumination of objects, as illustrated in FIGS. 2A-2B. In FIG. 2B, pixels in the image of FIG. 2A that have a luminance level above a given threshold are given a white value, and pixels having a luminance level below the threshold are given a black value…”).
Regarding Claim 15, Shadeed teaches all the elements of claim 1. Shadeed further teaches … the controller controls the lighting-emitting device to change at least one of the plurality of separately-controllable region-specific light-emitting elements of at least one of the plurality of separately-controllable light emitting regions to emit one or more additional light patterns that enable the sensor to recognize the one or more objects (See at least Page 506 Col 2 Para 2 “2) LED-Array Headlamp - … the LED-array headlamp does not need moving elements to generate different light distributions. Instead, the light sources are addressed directly. The light distributions are generated by creating an image of a matrix of LED-chips. The possibility of individually controlling each LED-chip of the matrix allows the generation of different shapes of light. Activating or deactivating single LED-chips of the matrix can easily realize assisting light functions; for example a glare-free high-beam function could be generated by switching off one or several of the LED-chips that illuminate the area of the oncoming traffic. Using a PWM (Pulse Width Modulation) principal to drive the LED-chips makes it possible to produce different levels of brightness, which enable us to adjust the light intensity according to the road illumination. Activating single chips that contribute to the light distribution above the cut-off line could be used to realize a marking light function. Figure 10 depicts a LED-array prototype as well as a low-beam and high-beam light distribution generated with this array. The possibility of separately controlling the brightness of each LED-chip of the matrix allows the generation of driver-specific light distributions.”, Page 506 Col 2 Para 3 “Using a PWM (Pulse Width Modulation) principal to drive the LED-chips makes it possible to produce different levels of brightness, which enable us to adjust the light intensity according to the road illumination. 
Activating single chips that contribute to the light distribution above the cut-off line could be used to realize a marking light function. Figure 10 depicts a LED-array prototype as well as a low-beam and high-beam light distribution generated with this array. The possibility of separately controlling the brightness of each LED-chip of the matrix allows the generation of driver-specific light distributions.”) …
However, Shadeed does not explicitly spell out the robot of claim 1, wherein, based on the one or more light patterns causing the object class specific probability of recognition to be lower than the preset value, … based on the object class specific probability of recognition to be higher than or equal to the preset value.
Brodsky teaches … causing the object class specific probability of recognition to be lower than the preset value (See at least Para [0030] “Track 310 in FIG. 3A illustrates a typical track 310 of a reflection pattern; the end 311 of the track 310 occurring when the reflection is insufficient to exceed a given threshold value. Track 320 in FIG. 3B, on the other hand, illustrates the track of an illumination pattern that exhibits a relatively continuous pattern, having an intensity above the given threshold value for most of the field of view of the camera…”, Para [0032] “At 510, an image is received, and at 520, the light patterns within the image are identified. As noted above, thresholding techniques may be used to identify only those light patterns that exceed a given threshold. Pattern matching techniques can also be applied to distinguish headlight patterns, such as recognizing characteristic sizes and shapes of headlight patterns, to distinguish headlights from other vehicle lights as well as from reflections, to further improve the reliability of vehicle identification based on headlight patterns…”, Para [0025] “Thresholding is a technique that is commonly used to reduce the effects caused by the transient illumination of objects, as illustrated in FIGS. 2A-2B. In FIG. 2B, pixels in the image of FIG. 2A that have a luminance level above a given threshold are given a white value, and pixels having a luminance level below the threshold are given a black value…”), … based on the object class specific probability of recognition to be higher than or equal to the preset value (See at least Para [0030] “Track 310 in FIG. 3A illustrates a typical track 310 of a reflection pattern; the end 311 of the track 310 occurring when the reflection is insufficient to exceed a given threshold value. Track 320 in FIG. 
3B, on the other hand, illustrates the track of an illumination pattern that exhibits a relatively continuous pattern, having an intensity above the given threshold value for most of the field of view of the camera…”, Para [0032] “At 510, an image is received, and at 520, the light patterns within the image are identified. As noted above, thresholding techniques may be used to identify only those light patterns that exceed a given threshold. Pattern matching techniques can also be applied to distinguish headlight patterns, such as recognizing characteristic sizes and shapes of headlight patterns, to distinguish headlights from other vehicle lights as well as from reflections, to further improve the reliability of vehicle identification based on headlight patterns…”, Para [0025] “Thresholding is a technique that is commonly used to reduce the effects caused by the transient illumination of objects, as illustrated in FIGS. 2A-2B. In FIG. 2B, pixels in the image of FIG. 2A that have a luminance level above a given threshold are given a white value, and pixels having a luminance level below the threshold are given a black value…”).
Regarding Claim 16, Shadeed teaches a robot comprising:
a lighting-emitting device comprising a plurality of separately-controllable light emitting regions, each of the plurality of separately-controllable light emitting regions comprising a plurality of separately-controllable region-specific light-emitting elements (See at least Page 503 Col 1 “A. Safety and Lighting Technology - … New developments like the Adaptive Front-Lighting System (AFS) can improve the illumination of the road in front of the vehicle by offering the driver an optimal light pattern in nearly every situation.”, Page 506 Col 2 Para 2 “2) LED-Array Headlamp - … the LED-array headlamp does not need moving elements to generate different light distributions. Instead, the light sources are addressed directly. The light distributions are generated by creating an image of a matrix of LED-chips. The possibility of individually controlling each LED-chip of the matrix allows the generation of different shapes of light. Activating or deactivating single LED-chips of the matrix can easily realize assisting light functions; for example a glare-free high-beam function could be generated by switching off one or several of the LED-chips that illuminate the area of the oncoming traffic.”, discloses LED-Array Headlamp which is construed as the lighting-emitting device that has a plurality of light-emitting elements);
a sensor that senses information in each of a plurality of spaces outside of the robot, each of the plurality of spaces outside of the robot corresponding to one of the plurality of separately-controllable light emitting regions (See at least Fig. 5, which discloses a headlamp detection filter indicating the light source unit, which may include the LED array; Page 505 Col 1 Para 1 “The system capture a colour frame from the image sensor and save it in three different formats, colour, grey, and binary image. Each frame-type is associated to a fast one pass customised filter, which is designed mainly to detect one type of objects at a time. After successful detection, the data are feed to three classifier-agents to determine the class of the object.”); and
a controller configured to:
control the lighting-emitting device to emit light into one or more of the plurality of spaces outside of the robot (See at least Page 506 Col 2 Para 2 “2) LED-Array Headlamp - … the LED-array headlamp does not need moving elements to generate different light distributions. Instead, the light sources are addressed directly. The light distributions are generated by creating an image of a matrix of LED-chips. The possibility of individually controlling each LED-chip of the matrix allows the generation of different shapes of light. Activating or deactivating single LED-chips of the matrix can easily realize assisting light functions; for example a glare-free high-beam function could be generated by switching off one or several of the LED-chips that illuminate the area of the oncoming traffic. Using a PWM (Pulse Width Modulation) principal to drive the LED-chips makes it possible to produce different levels of brightness, which enable us to adjust the light intensity according to the road illumination. Activating single chips that contribute to the light distribution above the cut-off line could be used to realize a marking light function. Figure 10 depicts a LED-array prototype as well as a low-beam and high-beam light distribution generated with this array. The possibility of separately controlling the brightness of each LED-chip of the matrix allows the generation of driver-specific light distributions.”, Page 505 Col 1 Para 1 “The system capture a colour frame from the image sensor and save it in three different formats, colour, grey, and binary image. Each frame-type is associated to a fast one pass customised filter, which is designed mainly to detect one type of objects at a time. After successful detection, the data are feed to three classifier-agents to determine the class of the object. Each agent has a weight. 
Each weight is multiplied with the estimated classification probability evaluated from the agent, and then the result is summed and normalized to estimate the percentage of trustiness of classification…”), and
control the sensor to recognize one or more objects in the one or more of the plurality of spaces outside of the robot based on a reflection of the emitted light from the one or more objects (See at least Page 506 Col 2 Para 2 “2) LED-Array Headlamp - … the LED-array headlamp does not need moving elements to generate different light distributions. Instead, the light sources are addressed directly. The light distributions are generated by creating an image of a matrix of LED-chips. The possibility of individually controlling each LED-chip of the matrix allows the generation of different shapes of light. Activating or deactivating single LED-chips of the matrix can easily realize assisting light functions; for example a glare-free high-beam function could be generated by switching off one or several of the LED-chips that illuminate the area of the oncoming traffic. Using a PWM (Pulse Width Modulation) principal to drive the LED-chips makes it possible to produce different levels of brightness, which enable us to adjust the light intensity according to the road illumination. Activating single chips that contribute to the light distribution above the cut-off line could be used to realize a marking light function. Figure 10 depicts a LED-array prototype as well as a low-beam and high-beam light distribution generated with this array. The possibility of separately controlling the brightness of each LED-chip of the matrix allows the generation of driver-specific light distributions.”, Page 505 Col 1 Para 1 “The system capture a colour frame from the image sensor and save it in three different formats, colour, grey, and binary image. Each frame-type is associated to a fast one pass customised filter, which is designed mainly to detect one type of objects at a time. After successful detection, the data are feed to three classifier-agents to determine the class of the object.”),
wherein the controller, when the one or more objects comprises a plurality of objects,
determines whether types of the plurality of objects are the same (See at least Page 505 Col 1 Para 1 “Our system does the same via detecting the light distributions of the oncoming/leading vehicles and classifying it in real-time using neural network and fuzzy logic supported with a prototypical database. For the reason that we are dealing with an opening environment like vehicle's space; neither one technique nor one set of hypotheses is applicable to be used to detect different types of objects. Therefore we developed our system based on separating objects in three categories Taillamps, Headlamps and Lane markings; which are probably appear mostly in the traffic situations and may be considered as the relevant targets to our system. The algorithm is designed for parallel processing in order to be suitable to be implemented later on FPGA. Figure 5 shows the main structure of the algorithms. The system capture a colour frame from the image sensor and save it in three different formats, colour, grey, and binary image. Each frame-type is associated to a fast one pass customised filter, which is designed mainly to detect one type of objects at a time. After successful detection, the data are feed to three classifier-agents to determine the class of the object. Each agent has a weight. Each weight is multiplied with the estimated classification probability evaluated from the agent, and then the result is summed and normalized to estimate the percentage of trustiness of classification.”), and
controls the lighting-emitting device to operate in a first object recognition mode or a
second object recognition mode based on a result of the determination (See at least Page 506 Col 2 Para 2 “2) LED-Array Headlamp - … the LED-array headlamp does not need moving elements to generate different light distributions. Instead, the light sources are addressed directly. The light distributions are generated by creating an image of a matrix of LED-chips. The possibility of individually controlling each LED-chip of the matrix allows the generation of different shapes of light. Activating or deactivating single LED-chips of the matrix can easily realize assisting light functions; for example a glare-free high-beam function could be generated by switching off one or several of the LED-chips that illuminate the area of the oncoming traffic. Using a PWM (Pulse Width Modulation) principal to drive the LED-chips makes it possible to produce different levels of brightness, which enable us to adjust the light intensity according to the road illumination. Activating single chips that contribute to the light distribution above the cut-off line could be used to realize a marking light function. Figure 10 depicts a LED-array prototype as well as a low-beam and high-beam light distribution generated with this array. The possibility of separately controlling the brightness of each LED-chip of the matrix allows the generation of driver-specific light distributions.”, discloses adjusting the light intensity according to the road illumination which is construed as controlling the light source unit to recognize the objects existing in the space in a first object recognition mode or a second object recognition mode based on a result of the determination).
Also, Brodsky teaches recognizing the objects existing in the space in a first object recognition mode or a second object recognition mode based on a result of the determination (See at least Para [0032] “… Thereafter, combinations of headlight patterns can be associated with each vehicle using further conventional pattern matching techniques, including, for example, rules that are based on consistency of movement among patterns, to pair patterns corresponding to a vehicle, as well as rules that are based on the distance between such consistently moving patterns, to distinguish among multiple vehicles traveling at the same rate of speed …”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Shadeed with the teachings of Brodsky and include the feature of recognizing the objects existing in the space in a first object recognition mode or a second object recognition mode based on a result of the determination, thereby distinguishing among detected objects in order to facilitate corresponding light irradiation, which will increase efficiency and reliability (See at least Para [0009] “It is a further object of this invention to facilitate the augmentation of existing video-based traffic monitoring systems to support day and night discrete vehicle identification and tracking.”, Para [0032] “… Pattern matching techniques can also be applied to distinguish headlight patterns, such as recognizing characteristic sizes and shapes of headlight patterns, to distinguish headlights from other vehicle lights as well as from reflections, to further improve the reliability of vehicle identification based on headlight patterns...”, Para [0033] “As is evident from FIG. 2B, however, the high-intensity reflections are often indistinguishable from headlights, and further processing is provided to improve the reliability of the vehicle identification process.”).
Regarding Claim 26, Shadeed teaches all the elements of claim 16.
However, Shadeed does not explicitly spell out the robot of claim 16, wherein the controller varies a threshold of an object class probability based on the types of the plurality of objects in the first object recognition mode.
Brodsky teaches the robot of claim 16, wherein the controller varies a threshold of an object class probability based on the types of the plurality of objects in the first object recognition mode (See at least Para [0030] “Track 310 in FIG. 3A illustrates a typical track 310 of a reflection pattern; the end 311 of the track 310 occurring when the reflection is insufficient to exceed a given threshold value. Track 320 in FIG. 3B, on the other hand, illustrates the track of an illumination pattern that exhibits a relatively continuous pattern, having an intensity above the given threshold value for most of the field of view of the camera…”, Para [0025] “Thresholding is a technique that is commonly used to reduce the effects caused by the transient illumination of objects, as illustrated in FIGS. 2A-2B. In FIG. 2B, pixels in the image of FIG. 2A that have a luminance level above a given threshold are given a white value, and pixels having a luminance level below the threshold are given a black value…”, Para [0032] “At 510, an image is received, and at 520, the light patterns within the image are identified. As noted above, thresholding techniques may be used to identify only those light patterns that exceed a given threshold. 
Pattern matching techniques can also be applied to distinguish headlight patterns, such as recognizing characteristic sizes and shapes of headlight patterns, to distinguish headlights from other vehicle lights as well as from reflections, to further improve the reliability of vehicle identification based on headlight patterns. Thereafter, combinations of headlight patterns can be associated with each vehicle using further conventional pattern matching techniques, including, for example, rules that are based on consistency of movement among patterns, to pair patterns corresponding to a vehicle, as well as rules that are based on the distance between such consistently moving patterns, to distinguish among multiple vehicles traveling at the same rate of speed.”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Shadeed with the teachings of Brodsky and include the feature of varying a threshold of an object class probability based on the types of the plurality of objects in the first object recognition mode, thereby distinguishing among detected objects in order to facilitate corresponding light irradiation, which will increase efficiency and reliability (See at least Para [0009] “It is a further object of this invention to facilitate the augmentation of existing video-based traffic monitoring systems to support day and night discrete vehicle identification and tracking.”, Para [0032] “… Pattern matching techniques can also be applied to distinguish headlight patterns, such as recognizing characteristic sizes and shapes of headlight patterns, to distinguish headlights from other vehicle lights as well as from reflections, to further improve the reliability of vehicle identification based on headlight patterns...”, Para [0033] “As is evident from FIG. 2B, however, the high-intensity reflections are often indistinguishable from headlights, and further processing is provided to improve the reliability of the vehicle identification process.”).
Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Shadeed et al. (H. Shadeed, J. Wallaschek and S. Mojrzisch, "On Intelligent Adaptive Vehicle Front-Lighting Assistance Systems," 2007 IEEE Intelligent Transportation Systems Conference, Bellevue, WA, USA, 2007, pp. 503-507) (Hereinafter Shadeed) in view of Brodsky (US 20050058323 A1), further in view of Goel et al. (US 12434709 B1) (Hereinafter Goel), and further in view of Siegwart et al. (Roland Siegwart, Illah Reza Nourbakhsh and Davide Scaramuzza, "Perception," in Introduction to Autonomous Mobile Robots, MIT Press, 2011, pp. 101-263) (Hereinafter Siegwart).
Regarding Claim 11, modified Shadeed teaches all the elements of claim 1.
Shadeed does not explicitly spell out the robot of claim 1, wherein the controller is configured to:
control the sensor to sense a motion of the one or more objects,
predict a future area where the one or more objects is to be located after a predetermined time,
control the lighting-emitting device to emit light so as to track the motion of the one or more objects so as to enable the sensor to continue to recognize the one or more objects based on the object class specific probability of recognition being higher than or equal to the preset value.
Goel teaches the robot of claim 1, wherein the controller is configured to:
control the sensor to sense a motion of the one or more objects (See at least Col 12 Lines 4-26 “The perception component 322 may include functionality to perform object detection, segmentation, and/or classification. In some examples, the perception component 322 and/or the machine learning component 332 may provide processed sensor data that indicates a presence of an entity that is proximate to the vehicle 302 and/or a classification of the entity as an entity type (e.g., car, pedestrian, cyclist, building, tree, road surface, curb, sidewalk, unknown, etc.). In additional and/or alternative examples, the perception component 322 and/or the machine learning component 332 may provide processed sensor data that indicates one or more characteristics associated with a detected entity and/or the environment in which the entity is positioned. In some examples, characteristics associated with an entity may include, but are not limited to, an x-position (global position), a y-position (global position), a z-position (global position), an orientation, an entity type (e.g., a classification), a velocity of the entity, an extent of the entity (size), etc. Characteristics associated with the environment may include, but are not limited to, a presence of another entity in the environment, a state of another entity in the environment, a time of day, a day of a week, a season, a weather condition, an indication of darkness/light, etc.”),
predict a future area where the one or more objects is to be located after a predetermined time (See at least Col 13 Lines 11-13 “The prediction component 324 may generate one or more probability maps representing prediction probabilities of possible locations of one or more objects in an environment…”), …
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the system of Shadeed with the teachings of Goel and include the feature of the controller sensing a motion of the object existing in the space through the sensing unit and predicting an area where the object is to be located after a predetermined time based on the sensed motion, thereby taking the future positions of obstacles into account in the calculation and making adjustments for precise, accurate, and safe movement of the robot (See at least Col 4 Lines 60-66 “The techniques described herein may improve the functioning of a computing device by providing a robust method of determining changes in environmental conditions associated with environments in which a vehicle is operating, and adjusting vehicle states, adjusting model outputs, and/or selecting model outputs to adjust for changes in environmental conditions.”).
Siegwart teaches control the lighting-emitting device to emit light so as to track the motion of the one or more objects so as to enable the sensor to continue to recognize the one or more objects based on the object class specific probability of recognition being higher than or equal to the preset value (See at least Page 213 Para 5 “Localization accuracy: the detected features should be accurately localized, both in image position and scale. Accuracy is especially important in camera calibration, 3D reconstruction from images (“structure from motion”), and panorama stitching.”, Page 244 Para 3 “Furthermore, we assume that each random variable is subject to a Gaussian probability density curve, with a mean at the true value and with some specified variance:”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the system of Shadeed with the teachings of Siegwart and include the feature of controlling the lighting-emitting device to emit light so as to track the motion of the one or more objects so as to enable the sensor to continue to recognize the one or more objects based on the object class specific probability of recognition being higher than or equal to the preset value, thereby precisely recognizing one or more objects (See at least Page 143 Para 4 “Only over time, as the underlying performance of imaging chips improves, will significantly more robust vision sensors for mobile robots be available”).
Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Shadeed et al. (H. Shadeed, J. Wallaschek and S. Mojrzisch, "On Intelligent Adaptive Vehicle Front-Lighting Assistance Systems," 2007 IEEE Intelligent Transportation Systems Conference, Bellevue, WA, USA, 2007, pp. 503-507) (Hereinafter Shadeed) in view of Brodsky (US 20050058323 A1), and further in view of Mehta et al. (US 20200207375 A1) (Hereinafter Mehta).
Regarding Claim 13, Shadeed teaches all the elements of claim 1.
However, Shadeed does not explicitly spell out the robot of claim 1, wherein the controller increases the preset value of the object class specific probability of recognition when the robot is in a stationary state, and decreases the preset value of the object class specific probability of recognition when the robot moves.
Mehta teaches the robot of claim 1, wherein the controller increases the preset value of the object class specific probability of recognition when the robot is in a stationary state, and decreases the preset value of the object class specific probability of recognition when the robot moves (See at least Para [0134] “… In some non-limiting embodiments or aspects, the cost associated with the cost function increases and/or decreases based on autonomous vehicle 104 deviating from a motion plan (e.g., a selected motion plan, an optimized motion plan, a preferred motion plan, etc.). For example, the cost associated with the cost function increases and/or decreases based on autonomous vehicle 104 deviating from the motion plan to avoid a collision with an object.”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective
filing date of the claimed invention to combine the teachings of Shadeed with the teachings of Mehta and include the feature of the controller increasing the preset value when the robot is in a stationary state and decreasing the preset value when the robot moves, thereby providing accurate calculations for object recognition and classification, which will lead to improved and safe robot navigation.
Claim(s) 17, 18, 19, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Shadeed et al (H. Shadeed, J. Wallaschek and S. Mojrzisch, "On Intelligent Adaptive Vehicle Front-Lighting Assistance Systems," 2007 IEEE Intelligent Transportation Systems Conference, Bellevue, WA, USA, 2007, pp. 503-507) (Hereinafter Shadeed) in view of Brodsky (US 20050058323 A1), and further in view of Taveira et al. (US 20190202449 A1) (Hereinafter Taveira).
Regarding Claim 17, Shadeed teaches all the elements of claim 16.
However, Shadeed does not explicitly spell out the robot of claim 16, wherein the controller
recognizes the plurality of objects existing in the space in the first object recognition mode when the plurality of objects existing in the space are of different types, and
recognizes the plurality of objects existing in the space in the second object recognition mode when the plurality of objects existing in the space are of the same type.
Taveira teaches the robot of claim 16, wherein the controller recognizes the plurality of objects existing in the space in the first object recognition mode when the plurality of objects existing in the space are of different types (See at least “Claim 13 … determine the classification of the object in the vicinity of the robotic vehicle by determining whether the object is animate object or inanimate object; and adjust the proximity threshold setting in the collision avoidance system based on the classification of the object by increasing the proximity threshold in response to the classification of the object being animate or decreasing the proximity threshold in response to the classification of the object being inanimate.”), and
recognizes the plurality of objects existing in the space in the second object recognition mode when the plurality of objects existing in the space are of the same type (See at least “Claim 13 … determine the classification of the object in the vicinity of the robotic vehicle by determining whether the object is animate object or inanimate object; and adjust the proximity threshold setting in the collision avoidance system based on the classification of the object by increasing the proximity threshold in response to the classification of the object being animate or decreasing the proximity threshold in response to the classification of the object being inanimate.”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective
filing date of the claimed invention to combine the system of Shadeed with the teachings of Taveira and include the feature of recognizing the plurality of objects existing in the space in the first object recognition mode when the objects are of different types and in the second object recognition mode when they are of the same type, thereby providing accurate calculations for object recognition and classification, which will lead to improved and safe robot navigation (See at least Para [0002] “…controlling, the robotic vehicle using the adjusted proximity threshold for collision avoidance.”).
Regarding Claim 18, Shadeed teaches all the elements of claim 17. Shadeed further teaches the robot of claim 17, wherein the first object recognition mode is a mode of extracting object class probabilities for each type, and recognizing the objects based on the object class probabilities extracted for each type (See at least Page 505 Col 1 Para 1 “Our system does the same via detecting the light distributions of the oncoming/leading vehicles and classifying it in real-time using neural network and fuzzy logic supported with a prototypical database. For the reason that we are dealing with an opening environment like vehicle's space; neither one technique nor one set of hypotheses is applicable to be used to detect different types of objects. Therefore we developed our system based on separating objects in three categories Taillamps, Headlamps and Lane markings; which are probably appear mostly in the traffic situations and may be considered as the relevant targets to our system. The algorithm is designed for parallel processing in order to be suitable to be implemented later on FPGA. Figure 5 shows the main structure of the algorithms. The system capture a colour frame from the image sensor and save it in three different formats, colour, grey, and binary image. Each frame-type is associated to a fast one pass customised filter, which is designed mainly to detect one type of objects at a time. After successful detection, the data are feed to three classifier-agents to determine the class of the object. Each agent has a weight. Each weight is multiplied with the estimated classification probability evaluated from the agent, and then the result is summed and normalized to estimate the percentage of trustiness of classification.”).
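For clarity, the Shadeed passage cited above (Page 505 Col 1 Para 1) describes a weighted fusion of classifier-agent outputs: each agent's weight is multiplied by its estimated classification probability, and the results are summed and normalized to yield a "trustiness" score. A minimal sketch of that computation follows; the agent weights and probabilities are hypothetical, not values from Shadeed:

```python
def trustiness(weights, probabilities):
    """Weighted fusion of classifier-agent outputs as described by Shadeed:
    each weight is multiplied by the agent's estimated classification
    probability, then the products are summed and normalized."""
    weighted_sum = sum(w * p for w, p in zip(weights, probabilities))
    return weighted_sum / sum(weights)  # normalized trustiness in [0, 1]

# Hypothetical weights and per-agent probabilities for the three frame-type
# agents (colour, grey, binary) named in the cited passage.
weights = [0.5, 0.3, 0.2]
probabilities = [0.9, 0.8, 0.6]
score = trustiness(weights, probabilities)
```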
Regarding Claim 19, Shadeed teaches all the elements of claim 18. Shadeed further teaches the robot of claim 18, wherein the controller, in the first object recognition mode, extracts the object class probabilities for each type and controls the lighting-emitting device so that an average of the extracted object class probabilities (See at least Page 505 Col 1 Para 1 “Our system does the same via detecting the light distributions of the oncoming/leading vehicles and classifying it in real-time using neural network and fuzzy logic supported with a prototypical database. For the reason that we are dealing with an opening environment like vehicle's space; neither one technique nor one set of hypotheses is applicable to be used to detect different types of objects. Therefore we developed our system based on separating objects in three categories Taillamps, Headlamps and Lane markings; which are probably appear mostly in the traffic situations and may be considered as the relevant targets to our system. The algorithm is designed for parallel processing in order to be suitable to be implemented later on FPGA. Figure 5 shows the main structure of the algorithms. The system capture a colour frame from the image sensor and save it in three different formats, colour, grey, and binary image. Each frame-type is associated to a fast one pass customised filter, which is designed mainly to detect one type of objects at a time. After successful detection, the data are feed to three classifier-agents to determine the class of the object. Each agent has a weight. Each weight is multiplied with the estimated classification probability evaluated from the agent, and then the result is summed and normalized to estimate the percentage of trustiness of classification.”)…
However, Shadeed does not explicitly spell out … object class probabilities for each class exceeds a type-specific threshold.
Brodsky teaches … object class probabilities for each class exceeds a type-specific threshold (See
at least Para [0030] “Track 310 in FIG. 3A illustrates a typical track 310 of a reflection pattern; the end 311 of the track 310 occurring when the reflection is insufficient to exceed a given threshold value. Track 320 in FIG. 3B, on the other hand, illustrates the track of an illumination pattern that exhibits a relatively continuous pattern, having an intensity above the given threshold value for most of the field of view of the camera…”, Para [0032] “At 510, an image is received, and at 520, the light patterns within the image are identified. As noted above, thresholding techniques may be used to identify only those light patterns that exceed a given threshold. Pattern matching techniques can also be applied to distinguish headlight patterns, such as recognizing characteristic sizes and shapes of headlight patterns, to distinguish headlights from other vehicle lights as well as from reflections, to further improve the reliability of vehicle identification based on headlight patterns…”, Para [0025] “Thresholding is a technique that is commonly used to reduce the effects caused by the transient illumination of objects, as illustrated in FIGS. 2A-2B. In FIG. 2B, pixels in the image of FIG. 2A that have a luminance level above a given threshold are given a white value, and pixels having a luminance level below the threshold are given a black value…”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective
filing date of the claimed invention to combine the system of Shadeed with the teachings of Brodsky and include the feature of object class probabilities for each class exceeds type-specific threshold, thereby providing optimal adaptive light irradiation for object recognition which will increase performance accuracy, efficiency and reliability of the robot (See at least Para [0009] “It is a further object of this invention to facilitate the augmentation of existing video-based traffic monitoring systems to support day and night discrete vehicle identification and tracking.”, Para [0032] “… Pattern matching techniques can also be applied to distinguish headlight patterns, such as recognizing characteristic sizes and shapes of headlight patterns, to distinguish headlights from other vehicle lights as well as from reflections, to further improve the reliability of vehicle identification based on headlight patterns...”, Para [0033] “As is evident from FIG. 2B, however, the high-intensity reflections are often indistinguishable from headlights, and further processing is provided to improve the reliability of the vehicle identification process.”).
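For clarity, the Brodsky passage cited above (Para [0025]) describes binary thresholding: pixels with a luminance level above a given threshold are given a white value, and pixels below the threshold are given a black value. A minimal sketch of that operation follows; the frame values and the threshold of 128 are hypothetical, not values from Brodsky:

```python
def threshold_image(pixels, threshold):
    """Binary thresholding per Brodsky Para [0025]: pixels with luminance
    above the threshold become white (255); all others become black (0)."""
    return [[255 if p > threshold else 0 for p in row] for row in pixels]

# Hypothetical 2x2 luminance frame thresholded at a mid-scale value.
frame = [[10, 200], [130, 90]]
binary = threshold_image(frame, 128)
```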
Regarding Claim 20, Shadeed teaches all the elements of claim 19. Shadeed further teaches the robot of claim 19, wherein the controller, when entering the first object recognition mode, controls the emitted light so that an average of the (See at least Page 503 Col 1 “A. Safety and Lighting Technology - … New developments like the Adaptive Front-Lighting System (AFS) can improve the illumination of the road in front of the vehicle by offering the driver an optimal light pattern in nearly every situation.”, Page 506 Col 2 Para 2 “2) LED-Array Headlamp - … the LED-array headlamp does not need moving elements to generate different light distributions. Instead, the light sources are addressed directly. The light distributions are generated by creating an image of a matrix of LED-chips. The possibility of individually controlling each LED-chip of the matrix allows the generation of different shapes of light. Activating or deactivating single LED-chips of the matrix can easily realize assisting light functions; for example a glare-free high-beam function could be generated by switching off one or several of the LED-chips that illuminate the area of the oncoming traffic. Using a PWM (Pulse Width Modulation) principal to drive the LED-chips makes it possible to produce different levels of brightness, which enable us to adjust the light intensity according to the road illumination. Activating single chips that contribute to the light distribution above the cut-off line could be used to realize a marking light function. Figure 10 depicts a LED-array prototype as well as a low-beam and high-beam light distribution generated with this array. 
The possibility of separately controlling the brightness of each LED-chip of the matrix allows the generation of driver-specific light distributions.”, Page 505 Col 1 Para 1 “Our system does the same via detecting the light distributions of the oncoming/leading vehicles and classifying it in real-time using neural network and fuzzy logic supported with a prototypical database. For the reason that we are dealing with an opening environment like vehicle's space; neither one technique nor one set of hypotheses is applicable to be used to detect different types of objects. Therefore we developed our system based on separating objects in three categories Taillamps, Headlamps and Lane markings; which are probably appear mostly in the traffic situations and may be considered as the relevant targets to our system. The algorithm is designed for parallel processing in order to be suitable to be implemented later on FPGA. Figure 5 shows the main structure of the algorithms. The system capture a colour frame from the image sensor and save it in three different formats, colour, grey, and binary image. Each frame-type is associated to a fast one pass customised filter, which is designed mainly to detect one type of objects at a time. After successful detection, the data are feed to three classifier-agents to determine the class of the object. Each agent has a weight. Each weight is multiplied with the estimated classification probability evaluated from the agent, and then the result is summed and normalized to estimate the percentage of trustiness of classification.”)…
However, Shadeed does not explicitly spell out … object class probabilities for each type
exceeds the corresponding type-specific threshold.
Brodsky teaches … object class probabilities for each type exceeds the corresponding type-
specific threshold (See at least Para [0030] “Track 310 in FIG. 3A illustrates a typical track 310 of a reflection pattern; the end 311 of the track 310 occurring when the reflection is insufficient to exceed a given threshold value. Track 320 in FIG. 3B, on the other hand, illustrates the track of an illumination pattern that exhibits a relatively continuous pattern, having an intensity above the given threshold value for most of the field of view of the camera…”, Para [0032] “At 510, an image is received, and at 520, the light patterns within the image are identified. As noted above, thresholding techniques may be used to identify only those light patterns that exceed a given threshold. Pattern matching techniques can also be applied to distinguish headlight patterns, such as recognizing characteristic sizes and shapes of headlight patterns, to distinguish headlights from other vehicle lights as well as from reflections, to further improve the reliability of vehicle identification based on headlight patterns…”, Para [0025] “Thresholding is a technique that is commonly used to reduce the effects caused by the transient illumination of objects, as illustrated in FIGS. 2A-2B. In FIG. 2B, pixels in the image of FIG. 2A that have a luminance level above a given threshold are given a white value, and pixels having a luminance level below the threshold are given a black value…”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective
filing date of the claimed invention to combine the system of Shadeed with the teachings of Brodsky and include the feature of object class probabilities for each type exceeds the corresponding type-
specific threshold, thereby providing optimal adaptive light irradiation for object recognition which will increase performance accuracy, efficiency and reliability of the robot (See at least Para [0009] “It is a further object of this invention to facilitate the augmentation of existing video-based traffic monitoring systems to support day and night discrete vehicle identification and tracking.”, Para [0032] “… Pattern matching techniques can also be applied to distinguish headlight patterns, such as recognizing characteristic sizes and shapes of headlight patterns, to distinguish headlights from other vehicle lights as well as from reflections, to further improve the reliability of vehicle identification based on headlight patterns...”, Para [0033] “As is evident from FIG. 2B, however, the high-intensity reflections are often indistinguishable from headlights, and further processing is provided to improve the reliability of the vehicle identification process.”).
Claim(s) 21, 22, and 23 are rejected under 35 U.S.C. 103 as being unpatentable over Shadeed et al (H. Shadeed, J. Wallaschek and S. Mojrzisch, "On Intelligent Adaptive Vehicle Front-Lighting Assistance Systems," 2007 IEEE Intelligent Transportation Systems Conference, Bellevue, WA, USA, 2007, pp. 503-507) (Hereinafter Shadeed) in view of Brodsky (US 20050058323 A1), in view of Taveira et al. (US 20190202449 A1) (Hereinafter Taveira), and further in view of Wang et al. (CN-110674166-A) (Hereinafter Wang).
Regarding Claim 21, modified Shadeed teaches all the elements of claim 17.
However, Shadeed does not explicitly spell out the robot of claim 17, wherein the second object recognition mode is a mode of recognizing objects based on class probabilities for the plurality of objects of the same type.
Wang teaches the robot of claim 17, wherein the second object recognition mode is a mode of
recognizing objects based on class probabilities for the plurality of objects of the same type (See at least Page 14 Para 2 “The processing module 305 is configured to determine a category of the evaluation object according to the determined score of the evaluation parameter of the evaluation object and a mapping relationship table between the classification category of the evaluation object and a parameter scoring rule, and evaluate the evaluation object of the same category”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective
filing date of the claimed invention to combine the system of Shadeed with the teachings of Wang and include the feature of a second object recognition mode recognizing objects based on class probabilities for the plurality of objects of the same type, thereby performing precise calculations of the environment for accurate, safe, and fast robot movement (See at least Page 16 Para 7 “… the resources for processing data can be reduced and the processing speed can be improved.”).
Regarding Claim 22, modified Shadeed teaches all the elements of claim 21. Shadeed further teaches the robot of claim 21, wherein the controller, in the second object recognition mode, extracts a class probability for each of a plurality of objects and controls the lighting-emitting device so that an average of the (See at least Page 503 Col 1 “A. Safety and Lighting Technology - … New developments like the Adaptive Front-Lighting System (AFS) can improve the illumination of the road in front of the vehicle by offering the driver an optimal light pattern in nearly every situation.”, Page 506 Col 2 Para 2 “2) LED-Array Headlamp - … the LED-array headlamp does not need moving elements to generate different light distributions. Instead, the light sources are addressed directly. The light distributions are generated by creating an image of a matrix of LED-chips. The possibility of individually controlling each LED-chip of the matrix allows the generation of different shapes of light. Activating or deactivating single LED-chips of the matrix can easily realize assisting light functions; for example a glare-free high-beam function could be generated by switching off one or several of the LED-chips that illuminate the area of the oncoming traffic. Using a PWM (Pulse Width Modulation) principal to drive the LED-chips makes it possible to produce different levels of brightness, which enable us to adjust the light intensity according to the road illumination. Activating single chips that contribute to the light distribution above the cut-off line could be used to realize a marking light function. Figure 10 depicts a LED-array prototype as well as a low-beam and high-beam light distribution generated with this array. 
The possibility of separately controlling the brightness of each LED-chip of the matrix allows the generation of driver-specific light distributions.”, Page 505 Col 1 Para 1 “Our system does the same via detecting the light distributions of the oncoming/leading vehicles and classifying it in real-time using neural network and fuzzy logic supported with a prototypical database. For the reason that we are dealing with an opening environment like vehicle's space; neither one technique nor one set of hypotheses is applicable to be used to detect different types of objects. Therefore we developed our system based on separating objects in three categories Taillamps, Headlamps and Lane markings; which are probably appear mostly in the traffic situations and may be considered as the relevant targets to our system. The algorithm is designed for parallel processing in order to be suitable to be implemented later on FPGA. Figure 5 shows the main structure of the algorithms. The system capture a colour frame from the image sensor and save it in three different formats, colour, grey, and binary image. Each frame-type is associated to a fast one pass customised filter, which is designed mainly to detect one type of objects at a time. After successful detection, the data are feed to three classifier-agents to determine the class of the object. Each agent has a weight. Each weight is multiplied with the estimated classification probability evaluated from the agent, and then the result is summed and normalized to estimate the percentage of trustiness of classification.”)…
However, Shadeed does not explicitly spell out … probability for each of a plurality of objects exceeds a type-specific threshold.
Brodsky teaches … probability for each of a plurality of objects exceeds a type-specific threshold
(See at least Para [0030] “Track 310 in FIG. 3A illustrates a typical track 310 of a reflection pattern; the end 311 of the track 310 occurring when the reflection is insufficient to exceed a given threshold value. Track 320 in FIG. 3B, on the other hand, illustrates the track of an illumination pattern that exhibits a relatively continuous pattern, having an intensity above the given threshold value for most of the field of view of the camera…”, Para [0032] “At 510, an image is received, and at 520, the light patterns within the image are identified. As noted above, thresholding techniques may be used to identify only those light patterns that exceed a given threshold. Pattern matching techniques can also be applied to distinguish headlight patterns, such as recognizing characteristic sizes and shapes of headlight patterns, to distinguish headlights from other vehicle lights as well as from reflections, to further improve the reliability of vehicle identification based on headlight patterns…”, Para [0025] “Thresholding is a technique that is commonly used to reduce the effects caused by the transient illumination of objects, as illustrated in FIGS. 2A-2B. In FIG. 2B, pixels in the image of FIG. 2A that have a luminance level above a given threshold are given a white value, and pixels having a luminance level below the threshold are given a black value…”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective
filing date of the claimed invention to combine the system of Shadeed with the teachings of Brodsky and include the feature of class probabilities exceeding a threshold, thereby providing optimal adaptive light irradiation for object recognition which will increase performance accuracy, efficiency and reliability of the robot (See at least Para [0009] “It is a further object of this invention to facilitate the augmentation of existing video-based traffic monitoring systems to support day and night discrete vehicle identification and tracking.”, Para [0032] “… Pattern matching techniques can also be applied to distinguish headlight patterns, such as recognizing characteristic sizes and shapes of headlight patterns, to distinguish headlights from other vehicle lights as well as from reflections, to further improve the reliability of vehicle identification based on headlight patterns...”, Para [0033] “As is evident from FIG. 2B, however, the high-intensity reflections are often indistinguishable from headlights, and further processing is provided to improve the reliability of the vehicle identification process.”).
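For clarity, the Shadeed passage cited above (Page 506 Col 2 Para 2) describes driving each LED-chip of the matrix with a PWM duty cycle to set per-chip brightness, and switching off chips that illuminate the area of the oncoming traffic to realize a glare-free high beam. A minimal sketch of that control scheme follows; the matrix dimensions, brightness levels, and mask are hypothetical, not values from Shadeed:

```python
def led_matrix_duty_cycles(brightness, glare_mask):
    """Sketch of the LED-array idea in the cited Shadeed passage: each chip's
    PWM duty cycle is set from a per-chip brightness level (clamped to [0, 1]),
    and chips flagged in glare_mask are switched off, e.g. for a glare-free
    high-beam over the area of the oncoming traffic."""
    return [
        [0.0 if masked else max(0.0, min(1.0, level))
         for level, masked in zip(row_levels, row_mask)]
        for row_levels, row_mask in zip(brightness, glare_mask)
    ]

# Hypothetical 2x3 matrix with the middle chip of the top row masked off.
levels = [[0.2, 0.9, 0.5], [1.0, 0.4, 0.0]]
mask = [[False, True, False], [False, False, False]]
duties = led_matrix_duty_cycles(levels, mask)
```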
Regarding Claim 23, Shadeed teaches all the elements of claim 22. Shadeed further teaches the robot of claim 22, wherein the controller, when entering the second object recognition mode, controls at least some of the plurality of light-emitting elements to emit light, to determine a light pattern that causes the average of the object class probabilities of the plurality of objects (See at least Page 503 Col 1 “A. Safety and Lighting Technology - … New developments like the Adaptive Front-Lighting System (AFS) can improve the illumination of the road in front of the vehicle by offering the driver an optimal light pattern in nearly every situation.”, Page 506 Col 2 Para 2 “2) LED-Array Headlamp - … the LED-array headlamp does not need moving elements to generate different light distributions. Instead, the light sources are addressed directly. The light distributions are generated by creating an image of a matrix of LED-chips. The possibility of individually controlling each LED-chip of the matrix allows the generation of different shapes of light. Activating or deactivating single LED-chips of the matrix can easily realize assisting light functions; for example a glare-free high-beam function could be generated by switching off one or several of the LED-chips that illuminate the area of the oncoming traffic. Using a PWM (Pulse Width Modulation) principal to drive the LED-chips makes it possible to produce different levels of brightness, which enable us to adjust the light intensity according to the road illumination. Activating single chips that contribute to the light distribution above the cut-off line could be used to realize a marking light function. Figure 10 depicts a LED-array prototype as well as a low-beam and high-beam light distribution generated with this array. 
The possibility of separately controlling the brightness of each LED-chip of the matrix allows the generation of driver-specific light distributions.”, Page 505 Col 1 Para 1 “Our system does the same via detecting the light distributions of the oncoming/leading vehicles and classifying it in real-time using neural network and fuzzy logic supported with a prototypical database. For the reason that we are dealing with an opening environment like vehicle's space; neither one technique nor one set of hypotheses is applicable to be used to detect different types of objects. Therefore we developed our system based on separating objects in three categories Taillamps, Headlamps and Lane markings; which are probably appear mostly in the traffic situations and may be considered as the relevant targets to our system. The algorithm is designed for parallel processing in order to be suitable to be implemented later on FPGA. Figure 5 shows the main structure of the algorithms. The system capture a colour frame from the image sensor and save it in three different formats, colour, grey, and binary image. Each frame-type is associated to a fast one pass customised filter, which is designed mainly to detect one type of objects at a time. After successful detection, the data are feed to three classifier-agents to determine the class of the object. Each agent has a weight. Each weight is multiplied with the estimated classification probability evaluated from the agent, and then the result is summed and normalized to estimate the percentage of trustiness of classification.”)…
However, Shadeed does not explicitly spell out … object class probabilities of the plurality of
objects to exceed the type-specific threshold.
Brodsky teaches … object class probabilities of the plurality of objects to exceed the type-
specific threshold (See at least Para [0030] “Track 310 in FIG. 3A illustrates a typical track 310 of a reflection pattern; the end 311 of the track 310 occurring when the reflection is insufficient to exceed a given threshold value. Track 320 in FIG. 3B, on the other hand, illustrates the track of an illumination pattern that exhibits a relatively continuous pattern, having an intensity above the given threshold value for most of the field of view of the camera…”, Para [0032] “At 510, an image is received, and at 520, the light patterns within the image are identified. As noted above, thresholding techniques may be used to identify only those light patterns that exceed a given threshold. Pattern matching techniques can also be applied to distinguish headlight patterns, such as recognizing characteristic sizes and shapes of headlight patterns, to distinguish headlights from other vehicle lights as well as from reflections, to further improve the reliability of vehicle identification based on headlight patterns…”, Para [0025] “Thresholding is a technique that is commonly used to reduce the effects caused by the transient illumination of objects, as illustrated in FIGS. 2A-2B. In FIG. 2B, pixels in the image of FIG. 2A that have a luminance level above a given threshold are given a white value, and pixels having a luminance level below the threshold are given a black value…”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective
filing date of the claimed invention to combine the system of Shadeed with the teachings of Brodsky and include the feature of class probabilities exceeding a type-specific threshold, thereby providing optimal adaptive light irradiation for object recognition which will increase performance accuracy, efficiency and reliability of the robot (See at least Para [0009] “It is a further object of this invention to facilitate the augmentation of existing video-based traffic monitoring systems to support day and night discrete vehicle identification and tracking.”, Para [0032] “… Pattern matching techniques can also be applied to distinguish headlight patterns, such as recognizing characteristic sizes and shapes of headlight patterns, to distinguish headlights from other vehicle lights as well as from reflections, to further improve the reliability of vehicle identification based on headlight patterns...”, Para [0033] “As is evident from FIG. 2B, however, the high-intensity reflections are often indistinguishable from headlights, and further processing is provided to improve the reliability of the vehicle identification process.”).
Claim(s) 24 and 25 are rejected under 35 U.S.C. 103 as being unpatentable over Shadeed et al (H. Shadeed, J. Wallaschek and S. Mojrzisch, "On Intelligent Adaptive Vehicle Front-Lighting Assistance Systems," 2007 IEEE Intelligent Transportation Systems Conference, Bellevue, WA, USA, 2007, pp. 503-507) (Hereinafter Shadeed) in view of Brodsky (US 20050058323 A1), in view of Taveira et al. (US 20190202449 A1) (Hereinafter Taveira), and further in view of Fukuda et al. (JP 2000047296 A) (Hereinafter Fukuda).
Regarding Claim 24, Shadeed teaches all the elements of claim 19.
However, Shadeed does not explicitly spell out the robot of claim 19, wherein the controller, in the first object recognition mode, controls the lighting-emitting device based on an object class probability of a type with a highest priority, with respect to different types of objects.
Fukuda teaches the robot of claim 19, wherein the controller, in the first object recognition mode, controls the lighting-emitting device based on an object class probability of a type with a highest priority, with respect to different types of objects (See at least Para [0006] “According to another aspect of the present invention, there is provided an image recognition apparatus, which comprises a camera for capturing an image of an object and a recognition means for recognizing the object based on image data acquired by the camera. An illumination unit that irradiates the object with light, a storage unit that stores a plurality of illumination conditions of the illumination unit in association with a use priority, and a highest priority among the plurality of illumination conditions. Illumination control means for controlling the illumination means based on the illumination conditions, and when a recognition error by the recognition means occurs, the illumination conditions are changed based on the priority order to cause the camera to re-capture an image…”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the system of Shadeed with the teachings of Fukuda and include the feature of the controller, in the first object recognition mode, controlling the light-emitting device based on an object class probability of a type with a highest priority, with respect to different types of objects, thereby providing optimal control of the light-emitting device for precise object recognition and fast robot operation (See at least Para [0005] “Therefore, an object of the present invention is to provide an image recognition apparatus and an image recognition method which are excellent in operability and can improve the operation rate of the apparatus.”).
Regarding Claim 25, modified Shadeed teaches all the elements of claim 24.
However, Shadeed does not explicitly disclose the robot of claim 24, wherein the controller controls the light-emitting device to emit a light pattern that causes the object class probability of the type with the highest priority to exceed the type-specific threshold.
Fukuda teaches the robot of claim 24, wherein the controller controls the light-emitting device to emit a light pattern that causes the object class probability of the type with the highest priority (See at least Para [0006] “According to another aspect of the present invention, there is provided an image recognition apparatus, which comprises a camera for capturing an image of an object and a recognition means for recognizing the object based on image data acquired by the camera. An illumination unit that irradiates the object with light, a storage unit that stores a plurality of illumination conditions of the illumination unit in association with a use priority, and a highest priority among the plurality of illumination conditions. Illumination control means for controlling the illumination means based on the illumination conditions, and when a recognition error by the recognition means occurs, the illumination conditions are changed based on the priority order to cause the camera to re-capture an image…”)…
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the system of Shadeed with the teachings of Fukuda and include the feature of the controller controlling the light-emitting device to emit a light pattern that causes the object class probability of the type with the highest priority, thereby providing optimal control of the light-emitting device for precise object recognition and fast robot operation (See at least Para [0005] “Therefore, an object of the present invention is to provide an image recognition apparatus and an image recognition method which are excellent in operability and can improve the operation rate of the apparatus.”).
Brodsky teaches … object class probability of the type with the highest priority exceeds the type-specific threshold (See at least Para [0030] “Track 310 in FIG. 3A illustrates a typical track 310 of a reflection pattern; the end 311 of the track 310 occurring when the reflection is insufficient to exceed a given threshold value. Track 320 in FIG. 3B, on the other hand, illustrates the track of an illumination pattern that exhibits a relatively continuous pattern, having an intensity above the given threshold value for most of the field of view of the camera…”, Para [0032] “At 510, an image is received, and at 520, the light patterns within the image are identified. As noted above, thresholding techniques may be used to identify only those light patterns that exceed a given threshold. Pattern matching techniques can also be applied to distinguish headlight patterns, such as recognizing characteristic sizes and shapes of headlight patterns, to distinguish headlights from other vehicle lights as well as from reflections, to further improve the reliability of vehicle identification based on headlight patterns…”, Para [0025] “Thresholding is a technique that is commonly used to reduce the effects caused by the transient illumination of objects, as illustrated in FIGS. 2A-2B. In FIG. 2B, pixels in the image of FIG. 2A that have a luminance level above a given threshold are given a white value, and pixels having a luminance level below the threshold are given a black value…”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the system of Shadeed with the teachings of Brodsky and include the feature of the object class probability of the type with the highest priority exceeding the type-specific threshold, thereby providing optimal adaptive light irradiation for object recognition, which would increase the performance accuracy, efficiency, and reliability of the robot (See at least Para [0009] “It is a further object of this invention to facilitate the augmentation of existing video-based traffic monitoring systems to support day and night discrete vehicle identification and tracking.”, Para [0032] “… Pattern matching techniques can also be applied to distinguish headlight patterns, such as recognizing characteristic sizes and shapes of headlight patterns, to distinguish headlights from other vehicle lights as well as from reflections, to further improve the reliability of vehicle identification based on headlight patterns...”, Para [0033] “As is evident from FIG. 2B, however, the high-intensity reflections are often indistinguishable from headlights, and further processing is provided to improve the reliability of the vehicle identification process.”).
Claim(s) 27 and 28 are rejected under 35 U.S.C. 103 as being unpatentable over Shadeed et al (H. Shadeed, J. Wallaschek and S. Mojrzisch, "On Intelligent Adaptive Vehicle Front-Lighting Assistance Systems," 2007 IEEE Intelligent Transportation Systems Conference, Bellevue, WA, USA, 2007, pp. 503-507) (Hereinafter Shadeed) in view of Brodsky (US 20050058323 A1), and further in view of Liang et al. (US 20220327314 A1) (Hereinafter Liang).
Regarding Claim 27, modified Shadeed teaches all the elements of claim 26.
However, Shadeed does not explicitly disclose the robot of claim 26, wherein the controller sets the threshold for recognizing the objects to a first threshold, and when a preset type of object is included in the plurality of objects, sets the threshold to a second threshold higher than the first threshold.
Liang teaches the robot of claim 26, wherein the controller sets the threshold for recognizing the objects to a first threshold, and when a preset type of object is included in the plurality of objects, sets the threshold to a second threshold higher than the first threshold (See at least Para [0036] “In a particular example, the video analytics engine 102 may be used to perform object recognition using any suitable technique, and assign a confidence score when determining whether, or not an object in an image 112 comprises a given object. Such a confidence score may be compared to an object recognition confidence threshold to determine whether, or not, the object in an image 112 comprises the given object. In one particular example, the camera 104 may be monitoring an indoor location where vehicles are not “normally” located; as such, an object recognition confidence threshold for detecting a vehicle at such an indoor location may be set relatively high for detecting a vehicle, but relatively low for detecting humans.”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the system of Shadeed with the teachings of Liang and include the feature of the controller setting the threshold for recognizing the objects to a first threshold and, when a preset type of object is included in the plurality of objects, setting the threshold to a second threshold higher than the first threshold, thereby providing the option of illuminating light according to the specific situation, which helps detect objects more precisely, creating accurate, efficient, and safe robot movement (See at least Para [0014] “…Regardless, such pruning may generally cause the video analytics engine to operate more efficiently as the number of video analytics parameters used to analyze the images by the video analytics engine is reduced.”).
Regarding Claim 28, modified Shadeed teaches all the elements of claim 27. Shadeed further teaches the robot of claim 27, wherein the controller, when the preset type of object is included in the plurality of objects, controls the light-emitting device to irradiate a light pattern (See at least Page 503 Col 1 “A. Safety and Lighting Technology - … New developments like the Adaptive Front-Lighting System (AFS) can improve the illumination of the road in front of the vehicle by offering the driver an optimal light pattern in nearly every situation.”, Page 506 Col 2 Para 2 “2) LED-Array Headlamp - … the LED-array headlamp does not need moving elements to generate different light distributions. Instead, the light sources are addressed directly. The light distributions are generated by creating an image of a matrix of LED-chips. The possibility of individually controlling each LED-chip of the matrix allows the generation of different shapes of light. Activating or deactivating single LED-chips of the matrix can easily realize assisting light functions; for example a glare-free high-beam function could be generated by switching off one or several of the LED-chips that illuminate the area of the oncoming traffic. Using a PWM (Pulse Width Modulation) principal to drive the LED-chips makes it possible to produce different levels of brightness, which enable us to adjust the light intensity according to the road illumination. Activating single chips that contribute to the light distribution above the cut-off line could be used to realize a marking light function. Figure 10 depicts a LED-array prototype as well as a low-beam and high-beam light distribution generated with this array. 
The possibility of separately controlling the brightness of each LED-chip of the matrix allows the generation of driver-specific light distributions.”, Page 505 Col 1 Para 1 “Our system does the same via detecting the light distributions of the oncoming/leading vehicles and classifying it in real-time using neural network and fuzzy logic supported with a prototypical database. For the reason that we are dealing with an opening environment like vehicle's space; neither one technique nor one set of hypotheses is applicable to be used to detect different types of objects. Therefore we developed our system based on separating objects in three categories Taillamps, Headlamps and Lane markings; which are probably appear mostly in the traffic situations and may be considered as the relevant targets to our system. The algorithm is designed for parallel processing in order to be suitable to be implemented later on FPGA. Figure 5 shows the main structure of the algorithms. The system capture a colour frame from the image sensor and save it in three different formats, colour, grey, and binary image. Each frame-type is associated to a fast one pass customised filter, which is designed mainly to detect one type of objects at a time. After successful detection, the data are feed to three classifier-agents to determine the class of the object. Each agent has a weight. Each weight is multiplied with the estimated classification probability evaluated from the agent, and then the result is summed and normalized to estimate the percentage of trustiness of classification.”)…
However, Shadeed does not explicitly disclose … that causes the object class probability to be higher than the second threshold.
Liang teaches … that causes the object class probability to be higher than the second threshold (See at least Para [0036] “In a particular example, the video analytics engine 102 may be used to perform object recognition using any suitable technique, and assign a confidence score when determining whether, or not an object in an image 112 comprises a given object. Such a confidence score may be compared to an object recognition confidence threshold to determine whether, or not, the object in an image 112 comprises the given object. In one particular example, the camera 104 may be monitoring an indoor location where vehicles are not “normally” located; as such, an object recognition confidence threshold for detecting a vehicle at such an indoor location may be set relatively high for detecting a vehicle, but relatively low for detecting humans.”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the system of Shadeed with the teachings of Liang and include the feature of causing the object class probability to be higher than the second threshold, thereby providing the option of illuminating light according to the specific situation, which helps detect objects more precisely, creating accurate, efficient, and safe robot movement (See at least Para [0014] “…Regardless, such pruning may generally cause the video analytics engine to operate more efficiently as the number of video analytics parameters used to analyze the images by the video analytics engine is reduced.”).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
Kim et al. (US 20100185328 A) teaches a robot that supplies a projector service according to a
user's context and a controlling method thereof. The robot includes a user detection unit detecting a user; a user recognition unit recognizing the user; an object recognition unit recognizing an object near the user; a position perception unit perceiving relative positions of the object and the user; a context awareness unit perceiving the user's context based on information on the user, the object and the relative positions between the user and the object; and a projector supplying a projector service corresponding to the user's context.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHAHEDA HOQUE whose telephone number is (571)270-5310. The examiner can normally be reached Monday-Friday 8:00 am- 5:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Ramon Mercado can be reached at 571-270-5744. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SHAHEDA HOQUE/ Examiner, Art Unit 3658
/Ramon A. Mercado/Supervisory Patent Examiner, Art Unit 3658