Detailed Action
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 3 and 12 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claims 3 and 12 recite “when the movable element meets the predefined condition”, which lacks antecedent basis. It is unclear whether “the predefined condition” is meant to refer to the “preset condition” of claim 1, or whether it is meant to refer to a new element. For examination purposes, the limitation will be interpreted as “when the movable element meets the preset condition”.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-2, 5-6, 10-11, 14-15, and 19 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Stockman et al. (US 20210097547 A1), (hereinafter Stockman).
Regarding claim 1, Stockman teaches a method for guiding palm verification performed by a terminal device, the method comprising:
displaying a guidance interface for palm verification, the guidance interface including graphical guidance information and the graphical guidance information comprising a movable element in a moving region, and a display location of the movable element in the moving region indicating a current distance between a palm and a detection device (Stockman, “Upon requesting to be enrolled in the user-recognition system, the user-recognition device may, with permission and/or upon explicit request by the user, begin collecting various types of biometric data, and/or other data, for the user. For example, the user-recognition device may include one or more imaging sensors (e.g., a camera) that begins capturing image data (e.g., an individual image, a sequence of images, a video, etc.) of at least a portion of the user, such as a palm of the user, a face of the user, or the like.”, pg. 2, paragraph 0022, lines 1-9, “As described above, the user-recognition device may request that the user move their hand to different locations, angles, and/or orientations as the user-recognition device captures the image data. In some instances, the user-recognition device may provide one or more user interfaces that help instruct the user to move the hand to the different locations, angles, and/or orientations. For example, the user-recognition device may display a user interface (referred to as a “first user interface”) that includes instructions to place the user's hand over the imaging component of the user-recognition device... While displaying the first user interface, the user-recognition device may detect the user's hand located over the imaging component. In some examples, the user-recognition device may detect the hand using a distance sensor. In other examples, the user-recognition device may detect the hand using the one or more imaging sensors. In either example, based on detecting the user's hand, the user-recognition device may display a user interface (referred to as a “second user interface”) that provides instructions for placing the hand at a target location over the imaging component. As described herein, the target location over the imaging component may include both a target vertical location (e.g., Z-direction) with respect to the imaging component and a target horizontal location (e.g., X-direction and Y-direction) with respect to the imaging component.”, pgs. 2 and 3, paragraphs 0029-0031, see Figs. 2A-2F. A graphical representation is presented to the user to guide the palm to a correct position for verification. Figures 2A-2F illustrate this graphic, including a first graphical element 204 indicating a target location/size for placing the palm and a second graphical element 206 indicating the palm's current location/size.);
dynamically adjusting the display location of the movable element in the moving region on the guidance interface in response to a change of the current distance between the palm and the detection device (Stockman, “The location of the user's hand may include a vertical location with respect to the imaging component and a horizontal location with respect to the imaging component. In some examples, the user-recognition device may detect the location of the user's hand at set time intervals. For instance, the user-recognition device may detect the location of the user's hand every millisecond, second, and/or the like… The user-recognition device may then update the second graphical element based on the detected locations of the user's hand. For example, the user-recognition device may update the size of the second graphical element based on the vertical locations of the user's hand.”, pg. 3, paragraphs 0033-0034, As the user places their palm over the imaging device, the location and size of the second graphical element 206 are updated based on the real-time position and distance of the user's palm, guiding the user toward the target location and size of the first graphical element 204.); and
displaying first prompt information for indicating starting of verification or completion of verification when the display location of the movable element in the moving region meets a preset condition (Stockman, “The user-recognition device may continue to provide the instructions to the user until the user-recognition device detects that the hand of the user is proximate to the target location. In some examples, the user-recognition device may detect the hand is proximate to the target location based on determining that the vertical location of the hand is within (e.g., less than) a threshold distance to the target vertical location and the horizontal location of the hand is within a threshold distance to the target horizontal location.”, pg. 4, paragraph 0039, lines 1-10, “In some instances, after the user-recognition device detects that the user's hand is located proximate to the target location, the user-recognition device may display a user interface (referred to as a “third user interface”) that indicates that the hand is at the correct location. Additionally, in instances where the user has already enrolled with the user-recognition system, the user-recognition system may perform the processes described herein to identify the user profile associated with the user. In instances where the user has yet to enroll with the user-recognition system, the user-recognition device may provide one or more additional user interfaces for receiving additional information for enrolling with the user-recognition system.”, pg. 4, paragraph 0040, Once the palm meets a target, such as being within a threshold distance and position, verification or additional enrollment is performed.).
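For illustration only, the guidance behavior mapped above can be sketched as follows; this is a minimal hypothetical sketch, and all names, thresholds, and functions are assumptions, not drawn from Stockman or the instant claims:

```python
# Hypothetical sketch of the claim 1 guidance loop: the movable element's
# display location tracks the palm-to-detector distance, and first prompt
# information is shown once a preset condition is met.

TARGET_DISTANCE_MM = 85.0   # assumed target vertical distance (cf. Stockman's ~85 mm example)
TOLERANCE_MM = 10.0         # assumed preset tolerance

def element_location(distance_mm: float, region_height: float = 100.0) -> float:
    """Map the current palm distance to a display location in the moving region."""
    # Clamp the distance into a displayable band, then scale to the region.
    clamped = max(0.0, min(distance_mm, 2 * TARGET_DISTANCE_MM))
    return region_height * clamped / (2 * TARGET_DISTANCE_MM)

def meets_preset_condition(distance_mm: float) -> bool:
    """Preset condition: palm within a tolerance of the target distance."""
    return abs(distance_mm - TARGET_DISTANCE_MM) <= TOLERANCE_MM

def guidance_step(distance_mm: float) -> str:
    loc = element_location(distance_mm)
    if meets_preset_condition(distance_mm):
        return f"element at {loc:.1f}: starting verification"   # first prompt information
    return f"element at {loc:.1f}: keep adjusting palm height"

if __name__ == "__main__":
    for d in (140.0, 110.0, 90.0):   # simulated readings as the palm lowers
        print(guidance_step(d))
```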
Regarding claim 2, Stockman teaches the method according to claim 1, wherein the preset condition is met when the display location of the movable element in the moving region matches a display location of a marked element in the moving region, the display location of the marked element in the moving region indicating a target distance or a target distance range between the palm and the detection device (Stockman, “The first graphical element 204 may indicate the target location for placing the hand over the user-recognition device 104. For instance, the first graphical element 204 may indicate both the target vertical location and the target horizontal location over the user-recognition device 104. In some instances, and as shown in the example of FIGS. 2A-2F, the first graphical element 204 includes a circle with dashed lines.”, pg. 6, paragraph 0060, lines 6-13, see Figs. 1B and 2A-2F, Figure 1B, step 116 illustrates the palm meeting a target position and distance defined by the first graphical element 204.).
Regarding claim 5, Stockman teaches the method according to claim 1, wherein the method further comprises: displaying, on the guidance interface, second prompt information for guiding a user to perform an operation in accordance with the current distance between the palm and the detection device (Stockman, “For a third example, if the horizontal location of the user's hand is to the front of the target horizontal location for the imaging component, the second user interface may include an instruction indicating that the user needs to move the hand "BACK". For a fourth example, if the horizontal location of the user's hand is to the back of the target horizontal location for the imaging component, the second user interface may include an instruction indicating that the user needs to move the hand "FORWARD".”, pg. 4, paragraph 0036, lines 11-20).
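For illustration only, the “second prompt information” mapped above can be sketched as a directional instruction derived from the palm's offset from the target location, analogous to Stockman's quoted “BACK”/“FORWARD” examples; the function and tolerance below are hypothetical assumptions:

```python
# Hypothetical sketch of claim 5's second prompt information: a directional
# instruction derived from the palm's horizontal offset from the target.

def second_prompt(palm_y_mm: float, target_y_mm: float, tolerance_mm: float = 5.0) -> str:
    offset = palm_y_mm - target_y_mm
    if offset > tolerance_mm:      # hand is to the front of the target location
        return "Move hand BACK"
    if offset < -tolerance_mm:     # hand is to the back of the target location
        return "Move hand FORWARD"
    return "Hold steady"

print(second_prompt(palm_y_mm=30.0, target_y_mm=0.0))   # -> "Move hand BACK"
```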
Regarding claim 6, Stockman teaches the method according to claim 1, wherein a display screen of the guidance interface does not overlap a palm detection plane of the detection device (Stockman, “In the example of FIG. 3B, the user-recognition device 104 may provide a user interface 310 that includes an instruction 312 associated with hovering the palm of the user over the user-recognition device 104 to pay for a transaction. Additionally, the user interface 310 includes an image 314 of a user placing a palm over the user-recognition device 104. In other words, the image 314 provides a representation of how the user should place the palm over the user-recognition device 104. In some instances, the user-recognition device 104 may display the user interface 310 right before detecting the palm of the user.”, pgs. 5 and 6, paragraph 0053, see Figs. 1A-1B, A display screen is configured in combination with the imaging device for capturing a user's palm and providing guidance. The palm is detected based on distance and imaging sensors to facilitate contact-free palm detection.).
Claim 10 corresponds to claim 1, additionally reciting a computer device comprising a processor and a memory to execute the method according to claim 1. Stockman teaches the addition of a computer device comprising a processor and a memory to execute the method according to claim 1 (Stockman, “For example, the user-recognition device 104 may comprise one or more processors 620 configured to power components of the user-recognition device 104 and may further include memory 622 which stores components that are at least partially executable by the processor(s) 620, as well as other data 662.”, pg. 9, paragraph 0087, lines 9-15). As indicated in the analysis of claim 1, Stockman teaches all the limitations of claim 1. Therefore, claim 10 is rejected for the same reasons as claim 1.
Claim 11 corresponds to claim 2, additionally reciting a computer device comprising a processor and a memory to execute the method according to claim 2. Stockman teaches the addition of a computer device comprising a processor and a memory to execute the method according to claim 2 (see analysis of claim 10). As indicated in the analysis of claim 2, Stockman teaches all the limitations of claim 2. Therefore, claim 11 is rejected for the same reasons as claim 2.
Claim 14 corresponds to claim 5, additionally reciting a computer device comprising a processor and a memory to execute the method according to claim 5. Stockman teaches the addition of a computer device comprising a processor and a memory to execute the method according to claim 5 (see analysis of claim 10). As indicated in the analysis of claim 5, Stockman teaches all the limitations of claim 5. Therefore, claim 14 is rejected for the same reasons as claim 5.
Claim 15 corresponds to claim 6, additionally reciting a computer device comprising a processor and a memory to execute the method according to claim 6. Stockman teaches the addition of a computer device comprising a processor and a memory to execute the method according to claim 6 (see analysis of claim 10). As indicated in the analysis of claim 6, Stockman teaches all the limitations of claim 6. Therefore, claim 15 is rejected for the same reasons as claim 6.
Claim 19 corresponds to claim 1, additionally reciting a non-transitory computer-readable storage medium storing a computer program to execute the method according to claim 1. Stockman teaches the addition of a non-transitory computer-readable storage medium storing a computer program to execute the method according to claim 1 (Stockman, “As shown in FIG. 9, the user-recognition device 104 includes one or more memories 622. The memory 622 comprises one or more computer-readable storage media (CRSM).”, pg. 12, paragraph 0116, lines 1-4). As indicated in the analysis of claim 1, Stockman teaches all the limitations of claim 1. Therefore, claim 19 is rejected for the same reasons as claim 1.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 3-4 and 12-13 are rejected under 35 U.S.C. 103 as being unpatentable over Stockman et al. (US 20210097547 A1) in view of Weiss (US 20160292524 A1).
Regarding claim 3, Stockman teaches the method according to claim 1, wherein the method further comprises: capturing an image of the palm by using a camera of the detection device when the movable element meets the predefined condition (Stockman, “After receiving the request to enroll from the user 606, the front-end enrollment component 632 may, at 646, begin generating image data 658 using one or more imaging component(s) 626 (e.g., cameras)… In some instances, while obtaining the image data 658, the user interface component 634 may cause the user-recognition device 104 to provide instructions for how to place the hand of the user 606. Once the front-end enrollment component 632 has obtained the image data 658 representing the palm or other portion of the user 606, the user-recognition device 104 may send (e.g., upload, stream, etc.) the image data 658 to the server(s) 608 over one or more networks 660 using one or more communication interfaces 624.”, pg. 10, paragraph 0090, Once the palm's position and distance meet the predefined thresholds applied during user guidance, image data is obtained and sent for further enrollment processing.).
Stockman does not teach recognizing and verifying the palm in the image based on a plurality of images of the palm that are captured at different distances to obtain a verification result.
However, Weiss teaches recognizing and verifying the palm in the image based on a plurality of images of the palm that are captured at different distances to obtain a verification result (Weiss, “The candidate person is instructed by system 100 to present a candidate body part to camera 105 so as to capture candidate images 40 of the candidate body part (step 1115). The captured candidate images are presented (step 1117) superimposed on the selected enrollment scale 1201. In step 1119, the candidate aligns one of the candidate images with selected enrollment scale 1201. In decision block 1121, if there is an alignment between candidate image 40 and selected scale 1201, then candidate image 40 may be verified or not verified as an authentic image of the candidate person as the previously enrolled person in step 1123… During the enrollment processes shown above in FIGS. 13, 14 and 15, there may be no knowledge by mobile computer system 100 of the hand details (size, etc.) of a person to be enrolled. Therefore, in the enrollment stage, several graticule scales 1201 which have respective graticule lines 1203 may be displayed on display 109 and the person aligns their hand to each scale 1201. Hands can be aligned to scales 1201 where the whole hand should be placed inside a rectangular box of scale 1201. Referring to FIG. 13, when the person aligns their hand on display 109 to each of the scales 1201 during enrollment, as a result, the hand may be actually placed at different distances to camera 105 for each of the scales 1201. Mobile computer system 100 may select the best scale 1201 for the user where the features extracted from enrollment image 20 related to corresponding scale 1201, are the most robust and distinct. From this point on the best selected scale 1201 may be used for the person and an enrollment image saved and used during verification.”, pg. 5, paragraphs 0083-0087, see Figs. 11 and 13, During enrollment, multiple target scales are presented to a user. The user aligns their palm with each scale, resulting in images captured at various distances. These images are then analyzed to determine the best scale and its corresponding image for verification.).
Stockman teaches displaying a guidance interface for palm verification, including a graphic corresponding to the user’s position and a target graphic to guide the user to collect a single image of the palm for verification (Stockman, pg. 10, paragraph 90, see Figs. 1A and 1B). Stockman does not teach recognizing or verifying the palm based on capturing multiple images at different distances to obtain a verification result. Weiss teaches guiding a user for palm verification by displaying multiple graphic scales for user alignment. This results in images captured at different distances, and from those images the optimal image is selected for enrollment and verification (see above). Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to have modified the palm verification of Stockman to include the multi-graphic alignment for image selection as taught by Weiss (Weiss, pg. 5, paragraphs 0083-0087). The motivation for doing so would have been to select verification images with the most robust and distinct features, thereby improving accuracy of the palm verification (as suggested by Weiss, “Mobile computer system 100 may select the best scale 1201 for the user where the features extracted from enrollment image 20 related to corresponding scale 1201, are the most robust and distinct. From this point on the best selected scale 1201 may be used for the person and an enrollment image saved and used during verification.”, pg. 5, paragraph 0087, lines 4-10). Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine the teachings of Stockman in view of Weiss to obtain the invention as specified in claim 3.
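For illustration only, the Stockman/Weiss combination mapped above can be sketched as follows; the capture and scoring functions are hypothetical stand-ins, not APIs from either reference:

```python
# Hypothetical sketch of the claim 3 combination: images of the palm are
# captured at several guided distances, scored, and the most robust/distinct
# one is used for verification.

from typing import Callable, List, Tuple

def verify_from_multiple_distances(
    capture: Callable[[float], bytes],          # returns image bytes for a given distance
    feature_score: Callable[[bytes], float],    # higher = more robust/distinct features
    distances_mm: List[float],
) -> Tuple[float, bytes]:
    """Capture one image per guided distance and return the best-scoring one."""
    candidates = [(d, capture(d)) for d in distances_mm]
    best_distance, best_image = max(candidates, key=lambda c: feature_score(c[1]))
    return best_distance, best_image

# Toy usage with fake capture/scoring so the sketch runs end to end.
if __name__ == "__main__":
    fake_capture = lambda d: bytes(int(d) % 256 for _ in range(4))
    fake_score = lambda img: float(sum(img))
    d, _ = verify_from_multiple_distances(fake_capture, fake_score, [60.0, 85.0, 110.0])
    print(f"best capture distance: {d} mm")
```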
Regarding claim 4, Stockman teaches the method according to claim 1, wherein the method further comprises: capturing an image of the palm by using a camera of the detection device; and recognizing and verifying the palm based on the image of the palm to obtain a verification result (Stockman, “After receiving the request to enroll from the user 606, the front-end enrollment component 632 may, at 646, begin generating image data 658 using one or more imaging component(s) 626 (e.g., cameras)… In some instances, while obtaining the image data 658, the user interface component 634 may cause the user-recognition device 104 to provide instructions for how to place the hand of the user 606. Once the front-end enrollment component 632 has obtained the image data 658 representing the palm or other portion of the user 606, the user-recognition device 104 may send (e.g., upload, stream, etc.) the image data 658 to the server(s) 608 over one or more networks 660 using one or more communication interfaces 624.”, pg. 10, paragraph 0090).
Stockman does not teach wherein there are a plurality of marked elements, and different marked elements correspond to different target distances or target distance ranges; and the displaying first prompt information for indicating starting of verification or completion of verification when the display location of the movable element in the moving region meets a preset condition comprises: displaying first prompt information for indicating starting of verification when the display location of the movable element in the moving region matches the display locations of the plurality of marked elements in the moving region in a target order.
However, Weiss teaches wherein there are a plurality of marked elements, and different marked elements correspond to different target distances or target distance ranges; and the displaying first prompt information for indicating starting of verification or completion of verification when the display location of the movable element in the moving region meets a preset condition comprises: displaying first prompt information for indicating starting of verification when the display location of the movable element in the moving region matches the display locations of the plurality of marked elements in the moving region in a target order (Weiss, “During the enrollment processes shown above in FIGS. 13, 14 and 15, there may be no knowledge by mobile computer system 100 of the hand details (size etc) of a person to be enrolled. Therefore, in the enrollment stage, several graticule scales 1201 which have respective graticule lines 1203 may be displayed on display 109 and the person aligns their hand to each scale 1201. Hands can be aligned to scales 1201 where the whole hand should be placed inside a rectangular box of scale 1201.”, pg. 5, paragraph 0086, “The process of verification may be repeated in a specific way. For example, during enrollment the user selects one of scales 1201a, 1201b, 1201c and aligns her hand to scale 1201. If verification is successful, the user continues to a second verification step with a different scale and so on. For a more secure option, the user during enrollment may combine scales 1201 in sequential verification steps and hence create a password from the ordered sequence of scales 1201.”, pg. 6, paragraph 0090, lines 1-8, Multiple graphic scales are presented to a user for guiding palm verification. Images are collected for each scale aligned by the user. This includes allowing a user to specify a password or sequence of scales that must be collected as part of verification.).
Stockman teaches displaying a guidance interface for palm verification, including a graphic corresponding to the user’s position and a target graphic to guide the user to collect an image of the palm for verification (Stockman, pg. 10, paragraph 0090, see Figs. 1A and 1B). Stockman teaches displaying prompt information for indicating the start of verification once the user is aligned with the target graphic (Stockman, pg. 4, paragraph 0040) but does not teach displaying multiple target graphics or displaying prompt information in response to a user matching the target graphics in a target order. Weiss teaches displaying multiple graphic scales for alignment by a user and collecting images of each of the scales in a target order for palm verification (see above). Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to have modified the palm verification of Stockman to include the ordered multi-graphic alignment as taught by Weiss (Weiss, pg. 5, paragraph 0086; pg. 6, paragraph 0090, lines 1-8). The motivation for doing so would have been to include additional passcode protection for the user, thereby reducing the risk of imposters and improving the security of the palm verification (as suggested by Weiss, “The combination of steps are saved in the enrollment phase and at each verification the user follows the same verification steps. Hence an imposter cannot predict the combination of finger placements and selections of squares in the order performed during enrollment.”, pg. 6, paragraph 0090, lines 13-18). Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine the teachings of Stockman in view of Weiss to obtain the invention as specified in claim 4.
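For illustration only, the ordered matching in the claim 4 combination (Weiss's “password” of scales) can be sketched as follows; the function, locations, and tolerance are hypothetical assumptions:

```python
# Hypothetical sketch of the claim 4 combination: the movable element must
# match a sequence of marked elements in a target order before the first
# prompt information (start of verification) is displayed.

from typing import List

def matched_in_order(observed_locations: List[float],
                     marked_locations: List[float],
                     tolerance: float = 2.0) -> bool:
    """Return True if the marked locations are matched in order, within tolerance."""
    idx = 0
    for loc in observed_locations:
        if idx < len(marked_locations) and abs(loc - marked_locations[idx]) <= tolerance:
            idx += 1   # this marked element is matched; advance to the next one
    return idx == len(marked_locations)

# The palm passes through the three marked locations in the enrolled order.
print(matched_in_order([10.0, 40.1, 70.5, 90.0], [40.0, 70.0, 90.0]))  # True
```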
Claim 12 corresponds to claim 3, additionally reciting a computer device comprising a processor and a memory to execute the method according to claim 3. Stockman in view of Weiss teaches the addition of a computer device comprising a processor and a memory to execute the method according to claim 3 (see analysis of claim 10). As indicated in the analysis of claim 3, Stockman in view of Weiss teaches all the limitations of claim 3. Therefore, claim 12 is rejected for the same reasons as claim 3.
Claim 13 corresponds to claim 4, additionally reciting a computer device comprising a processor and a memory to execute the method according to claim 4. Stockman in view of Weiss teaches the addition of a computer device comprising a processor and a memory to execute the method according to claim 4 (see analysis of claim 10). As indicated in the analysis of claim 4, Stockman in view of Weiss teaches all the limitations of claim 4. Therefore, claim 13 is rejected for the same reasons as claim 4.
Claims 8, 17, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Stockman et al. (US 20210097547 A1) in view of Matsunaga et al. (JP 2020188372 A), (hereinafter Matsunaga).
Regarding claim 8, Stockman teaches the method according to claim 1, wherein the method further comprises: capturing a current frame of image of the palm by using a camera of the detection device; performing palm detection on the current frame of image to obtain predicted coordinates and a predicted size that correspond to the palm (Stockman, “The user-recognition device may then update the second graphical element based on the detected locations of the user's hand. For example, the user-recognition device may update the size of the second graphical element based on the vertical locations of the user's hand. For instance, if the vertical location of the user's hand is proximate to the target vertical location for the imaging component (e.g., eight-five millimeters above the imaging component), the user-recognition device may cause the size of the second graphical element to match the size of the first graphical element… The user-recognition device may also update the position of the second graphical element based on the horizontal locations of the user's hand. For instance, if the horizontal location of the user's hand is proximate to the target horizontal location for the imaging component (e.g., near the middle of the imaging component), the user-recognition device may cause the second graphical element to be centered within the first graphical element.”, pg. 3, paragraphs 0034-0035, The palm's size and location are detected across image frames to dynamically update the user's guidance graphic.).
Stockman does not teach determining a target distance sensor from a plurality of distance sensors corresponding to the detection device based on the predicted coordinates and the predicted size; and obtaining the current distance between the palm and the detection device based on distance information corresponding to the target distance sensor.
However, Matsunaga teaches determining a target distance sensor from a plurality of distance sensors corresponding to the detection device based on the predicted coordinates and the predicted size; and obtaining the current distance between the palm and the detection device based on distance information corresponding to the target distance sensor (Matsunaga, “The plurality of sensors 30 are provided on the rear bumper 5 at intervals in the vehicle width direction X. In the present embodiment, the plurality of sensors 30 is composed of six known sensors 31 to 36 having a function of measuring a linear distance from a rear obstacle W (see FIG. 2) in the periphery of the own vehicle 1. Each of the six sensors 31 to 36 is set with a detection area B capable of detecting the distance to the obstacle W according to the time until the transmitted ultrasonic wave is reflected by the obstacle W and returns.”, pg. 6, lines 18-23, “When at least one of the six sensors 31 to 36 detects an obstacle W while the back camera 21 is activated, the electronic control unit 60 derives a measured value D of the distance to the obstacle W by the calculation unit 63. Further, the electronic control unit 60 determines the target sensor having the smallest measured value D of the distance to the obstacle W among the six sensors 31 to 36 by the determination unit 64. Then, the electronic control unit 60 controls the drive unit 40 so that the detection area B of the target sensor is included in the shooting range C of the back camera 21 according to the cooperative control of the six sensors 31 to 36 and the back camera 21.”, pg. 8, lines 6-19, Various distance sensors are configured with a camera for object detection. A distance to an object is measured for each sensor, and the sensor with the smallest measured distance is designated as the target sensor. The camera's viewpoint is then adjusted according to the target sensor's detection area.).
Stockman teaches displaying a guidance interface for palm verification, including updating a guidance graphic by detecting the position and distance of a user’s palm using image and distance sensors (Stockman, pg. 3, paragraphs 0030 and 0034-0035). Stockman does not teach selecting a target distance sensor from a plurality of distance sensors. Matsunaga teaches selecting a target sensor, which has the smallest measured distance to an object, from an array of sensors (see above). Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to have modified the palm detection of Stockman to include target distance sensor selection as taught by Matsunaga (Matsunaga, pg. 8, lines 6-19). The motivation for doing so would have been to improve the accuracy of palm detection by ensuring the system relies on the most relevant sensor reading. The combination of Stockman in view of Matsunaga would determine a target sensor based on the palm’s position and distance, adjust the image data accordingly, and then update the guidance graphic based on that adjusted data. Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine the teachings of Stockman in view of Matsunaga to obtain the invention as specified in claim 8.
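For illustration only, the sensor-selection step in the claim 8 combination can be sketched as follows; the data structure and selection rule are hypothetical assumptions (Matsunaga's rule of taking the smallest measured distance, here restricted to sensors under the palm's predicted footprint):

```python
# Hypothetical sketch of the claim 8 combination: a target distance sensor
# is chosen from a plurality of sensors based on the palm's predicted
# coordinates and size, and its reading supplies the current palm distance.

from dataclasses import dataclass
from typing import List

@dataclass
class DistanceSensor:
    x_mm: float          # sensor position along the detection plane
    reading_mm: float    # measured distance to the nearest object

def select_target_sensor(sensors: List[DistanceSensor],
                         predicted_x_mm: float,
                         predicted_width_mm: float) -> DistanceSensor:
    """Prefer sensors under the predicted palm footprint; take the smallest reading."""
    half = predicted_width_mm / 2
    under_palm = [s for s in sensors if abs(s.x_mm - predicted_x_mm) <= half]
    candidates = under_palm or sensors   # fall back to all sensors if none overlap
    return min(candidates, key=lambda s: s.reading_mm)

sensors = [DistanceSensor(0, 300), DistanceSensor(50, 120), DistanceSensor(100, 115)]
target = select_target_sensor(sensors, predicted_x_mm=60, predicted_width_mm=40)
print(f"current palm distance: {target.reading_mm} mm")  # sensor at x=50 is selected
```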
Claim 17 corresponds to claim 8, additionally reciting a computer device comprising a processor and a memory to execute the method according to claim 8. Stockman in view of Matsunaga teaches the addition of a computer device comprising a processor and a memory to execute the method according to claim 8 (see analysis of claim 10). As indicated in the analysis of claim 8, Stockman in view of Matsunaga teaches all the limitations of claim 8. Therefore, claim 17 is rejected for the same reasons as claim 8.
Claim 20 corresponds to claim 8, additionally reciting a non-transitory computer-readable storage medium storing a computer program to execute the method according to claim 8. Stockman in view of Matsunaga teaches the addition of a non-transitory computer-readable storage medium storing a computer program to execute the method according to claim 8 (see analysis of claim 19). As indicated in the analysis of claim 8, Stockman in view of Matsunaga teaches all the limitations of claim 8. Therefore, claim 20 is rejected for the same reasons as claim 8.
Allowable Subject Matter
Claims 7, 9, 16 and 18 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CONNOR LEVI HANSEN whose telephone number is (703)756-5533. The examiner can normally be reached Monday-Friday 9:00-5:00 (ET).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Sumati Lefkowitz can be reached at (571) 272-3638. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/CONNOR L HANSEN/Examiner, Art Unit 2672
/SUMATI LEFKOWITZ/Supervisory Patent Examiner, Art Unit 2672