DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Receipt is acknowledged of certified copies of papers submitted under 35 U.S.C. 119(a)-(d), which papers have been placed in the file wrapper.
Claim Objections
Claims 1-9 are objected to because of the following informalities: throughout the claims, limitations lack introductory words such as “wherein” that would make the claims easier to read and understand. Appropriate correction is required.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claim 1 is rejected under 35 U.S.C. 103 as being unpatentable over
Panneer Selvam et al. (US Pub. No. 2022/0157143 A1) in view of Jabara (US Pub. No. 2020/0137569 A1).
Regarding claim 1, Panneer discloses, a blockchain and human characteristics intelligence recognition-based appointment-based elderly care system:
including a cloud server, (See Panneer ¶136, “For example, the information regarding the user, smartwatch, and environment may be stored in a smart phone, server cloud system/network.”)
a smartwatch (See Panneer ¶35, “The illustrative embodiments provide a system, method, device, smartwatch, smart band, or wearable for monitoring a user.”)
and a management terminal, with the cloud server linked to the smartwatch and the management terminal, (See Panneer ¶45, “The illustrative embodiments may be accessible to the user, caregiver, family, physicians, and others. The illustrative embodiments may provide remote patient monitoring. … The illustrative embodiments may provide a cloud-based, predictive analytics platform to enhance user/patient care.”
Further see Panneer ¶136, “For example, the information regarding the user, smartwatch, and environment may be stored in a smart phone, server cloud system/network, or other device, system, equipment, or component.”)
the smartwatch worn by the elderly user, (See Panneer ¶86, “FIG. 4 is a pictorial representation of users wearing a smartwatch in accordance with an illustrative embodiment. Various users 402 may utilize the smartwatch 400 including a toddler 404, an adult 406, and an elderly user 408.”)
and the management terminal including a PC terminal, a mobile terminal and a display screen; (See Panneer ¶95, “As a result, the wireless device (i.e. smart phone, tablet, laptop, home computer, designated hub, etc.) may be utilized to store, display, compile, synchronize, and compile data for the smartwatch 500.”)
a smartwatch includes a central processing module, (See Panneer ¶92, “The processor may represent any number of microprocessors, digital signal processors (DSP).”)
a video capture module, (See Panneer ¶69, “The smartwatch 100 may include any number of cameras including the camera 104.” Further see Panneer ¶146, “In one embodiment, the smartwatch may record audio and/or video of the user.”)
a voice module, (See Panneer ¶76, “The microphone 112 it is a component for converting soundwaves into an electrical signal which may be amplified, recorded/saved, or communicated. The microphone 112 may receive voice commands from the user.”)
a communication module, (See Panneer ¶110, “The smartwatch 500 may also include one or more transceivers 520. The transceivers 520 are components including both a transmitter and receiver which may be combined and share common circuitry on a single housing.”)
a power module (See Panneer ¶99, “In one embodiment, the physical interface 516 is a magnetic interface that utilizes the charging pins (e.g., pogo pins) to couple to an interface of a power system.”)
and a touch control module, (See Panneer ¶61, “The display 102 may be a touch display for providing information and receiving input from the user.”)
the central processing module being electrically connected to the video capture module, the voice module, the communication module, the power module and the touch control module, (See Panneer ¶88, “FIG. 5 is a pictorial representation of a block diagram of a smartwatch 500 in accordance with an illustrative embodiment. The smartwatch 500 may include any number of operatively connected components including a battery 508, a logic engine 510, a memory 510, a user interface 514, physical interface 516, sensors 518, and transceivers 520.”)
the central processing module being connected to the cloud server via the communication module; (See Panneer ¶119, “In one embodiment, the learning module 528 may communicate health information to one or more cloud networks or systems” Further see Panneer ¶136, “For example, the information regarding the user, smartwatch, and environment may be stored in a smart phone, server cloud system/network.”)
the touch control module reading the touch operation commands on the display of the smartwatch; (See Panneer ¶96, “As previously noted, the user interface 514 may include one or more touch displays for displaying and receiving information and selections from the user.”)
a management terminal is used by service staff, supervisors, family members and customer service to facilitate understanding of the current situation of the elderly, establish mutual communication channels and push timely information on possible dangers. (See Panneer ¶98, “The user interface 514 may utilize any number of screens, windows, or presented information to interface through … an external device (e.g., laptop, desktop computer, tablet, etc.). … Access to vital readings, such as baseline, acceptable, and historic readings, may be quickly accessed and reviewed. … Secure messaging may be performed between the user, care teams, medical professionals, guardians, caregivers, and others. In one embodiment, the user interface 514 may provide a single central platform that may be managed by administrators for multiple users. Access to information (e.g., daily vitals, health data, activity data, etc.), secure communications, and alerts may be allowed based on permissions, settings, and other applicable information.”)
a cloud server parses the data transmitted by the smartwatch and the management terminal, (See Panneer ¶136, “For example, the information regarding the user, smartwatch, and environment may be stored in a smart phone, server cloud system/network.” Further see Panneer ¶33, “The system includes a server configured to communicate through one or more networks. The server receives inputs for at least height, weight, and age of a user. The server determines a body mass index of the user utilizing the height and weight of the user, assign values for activity level, body mass index, hydration information, and the age of the user.”)
Panneer discloses the above limitations but fails to disclose the following limitation: and uses blockchain technology for data storage and execution of the corresponding commands for the data.
However, Jabara discloses, and uses blockchain technology for data storage and execution of the corresponding commands for the data; (See Jabara ¶69, “As illustrated in FIG. 7, each block contains the data associated with each user. In this embodiment, the secure database 124 may be implemented and distributed over one or more servers 170 that may be part of a cloud computing environment 172. As those skilled in the art will appreciate, a Blockchain database is typically distributed over a large number of servers 170 that each contain an identical copy of the encrypted database.” Further see Jabara ¶70, “The UE (user equipment) 132 can access the centralized secure database 124 through a licensed network, such as a cellular network embodied by the base station 112, core network 116, and gateway 118.”)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the storing of user data using blockchain and executing database access commands as suggested by Jabara to Panneer’s obtaining of monitored user data from using a smartwatch. This can be done using known engineering techniques, with a reasonable expectation of success. The motivation for doing so is that blockchain uses a decentralized ledger, which ensures that the monitoring data of the user cannot be easily tampered with, deleted, or manipulated by a single individual.
Claims 2 and 3 are rejected under 35 U.S.C. 103 as being unpatentable over
Panneer Selvam et al. (US Pub. No. 2022/0157143 A1) in view of Jabara (US Pub. No. 2020/0137569 A1) in view of Goswami et al. (US Pub. No. 2019/0238568 A1) in view of Ramaswamy (US Pat. No. 9,224,060 B1) in view of Zhai et al. (US Pub. No. 2021/0004570 A1) in view of Khan et al. (US Pub. No. 2023/0206700 A1) and in further view of Lei et al. (“Face Recognition Using LBP Eigenfaces”).
Regarding claim 2, Panneer and Jabara disclose, the blockchain and human characteristics intelligence recognition-based appointment-based elderly care system of claim 1, the central processing module matches the face image information captured by the video capture module, (See Panneer ¶147, “The smartwatch may also receive a touch pattern (step 1208). …The touch pattern may also represent a specific fingerprint or body part activation or scan (e.g., nose print, iris scan, facial recognition). For example, step 1208 may include one or more captured images of the user.”)
Panneer and Jabara disclose facial recognition using a captured image from the smartwatch, but they fail to disclose the limitations for steps 1.1) and 1.2) of this claim.
However, Goswami discloses, specifically comprising the steps of: 1.1) data acquisition step: the video acquisition module acquires the original image of the wearer's human face features; (See Goswami ¶89, “For purposes of the present description, it will be assumed that the image capture device 620 is a digital camera that captures facial images of individuals for which the image processing system 600 is used to perform facial recognition services.”)
the acquired original image is de-noised, (See Goswami ¶59, “The illustrative embodiments may also apply a median filter, e.g., a median filter of size 5×5, for denoising the image before extracting the features.”)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the denoising of an image using a median filter as a preprocessing step as suggested by Goswami to Panneer and Jabara’s facial recognition. This can be done using known engineering techniques, with a reasonable expectation of success. The motivation for doing so is that a median filter excels at removing impulse noise while minimizing the blurring effects often caused by other smoothing filters.
Panneer, Jabara, and Goswami disclose the above limitations, but they fail to disclose color correction.
However, Ramaswamy discloses, then color-corrected, (See Ramaswamy 16:42-45, “In various embodiments, image data can be pre-processed to improve object detection and tracking. Pre-processing can include histogram equalization or optimization”)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the color correction using histogram equalization as a preprocessing step as suggested by Ramaswamy to Panneer, Jabara, and Goswami’s facial recognition. This can be done using known engineering techniques, with a reasonable expectation of success. The motivation for doing so is that histogram equalization enhances contrast and normalizes illumination, which significantly improves facial recognition under varied lighting conditions.
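For context, the histogram equalization preprocessing relied on above can be sketched in pure Python. This is an illustrative sketch only; the helper name `equalize` and the list-of-lists grayscale image format are assumptions for illustration and do not appear in Ramaswamy.

```python
def equalize(img, levels=256):
    """Histogram equalization: map each gray level through the scaled
    cumulative distribution function (CDF) so concentrated histogram
    regions are stretched and sparse regions are compressed."""
    flat = [p for row in img for p in row]
    n = len(flat)
    hist = [0] * levels
    for p in flat:
        hist[p] += 1
    # Build the cumulative histogram.
    cdf = [0] * levels
    running = 0
    for g in range(levels):
        running += hist[g]
        cdf[g] = running
    cdf_min = next(c for c in cdf if c > 0)
    # Lookup table: scale the CDF to the full output range.
    lut = [round((cdf[g] - cdf_min) / (n - cdf_min) * (levels - 1))
           if n > cdf_min else 0
           for g in range(levels)]
    return [[lut[p] for p in row] for row in img]
```

Applied to a low-contrast patch such as `[[100, 100], [101, 101]]`, the two occupied gray levels are spread to the extremes of the output range, which is the contrast-normalizing effect cited in the motivation above.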
Paneer, Jabara, Goswami, and Ramaswamy disclose the above limitations but they fail to disclose face alignment.
However, Zhai discloses, followed by face alignment, (See Zhai ¶60, “an included angle between a connecting line of the left and right eyes and a horizontal plane is calculated, and the beautiful face image is rotated according to a value of the included angle, so that the face are horizontally aligned to overcome data difference caused by posture deflection.”)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the face alignment as suggested by Zhai to Panneer, Jabara, Goswami, and Ramaswamy’s facial recognition. This can be done using known engineering techniques, with a reasonable expectation of success. The motivation for doing so is that proper alignment is crucial for maximizing recognition accuracy. It helps in precisely locating facial components, which is essential for extracting discriminative features that differentiate between individuals.
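The alignment step Zhai describes, computing the included angle between the line joining the eye centers and the horizontal, then rotating by that angle, can be sketched as follows. The function name and coordinate convention are illustrative assumptions, not drawn from Zhai.

```python
import math

def eye_alignment_angle(left_eye, right_eye):
    """Angle, in degrees, between the line connecting the two eye
    centers and the horizontal plane. Rotating the face image by the
    negative of this angle levels the eyes, per Zhai paragraph 60."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    return math.degrees(math.atan2(dy, dx))
```

For a face tilted so the right eye sits 10 pixels higher in x and y than the left, the computed angle is 45 degrees, and rotating the image by -45 degrees horizontally aligns the eyes.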
Panneer, Jabara, Goswami, Ramaswamy, and Zhai disclose the above limitations, but they fail to disclose, and finally cropped to obtain a pure face image.
However, Khan discloses, and finally cropped to obtain a pure face image; (See Khan ¶35, “The alignment module 114 crops and align a face for pre-processing purpose before passing it on to the facial recognition module 118.”)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the cropping of a facial image as suggested by Khan to Panneer, Jabara, Goswami, Ramaswamy, and Zhai’s facial recognition. This can be done using known engineering techniques, with a reasonable expectation of success. The motivation for doing so is that cropping prior to facial recognition is essential to isolate the subject, remove background noise, and normalize the data, which significantly improves recognition accuracy and speed.
Panneer, Jabara, Goswami, Ramaswamy, Zhai, and Khan disclose the above limitations, but they fail to disclose the limitations of step 1.2).
However, Lei discloses, 1.2) feature extraction step: obtain the data set after processing in step 1.1), and use I_i to denote the i-th image of size P*Q; perform LBP on each image to obtain a new image I_LBP; each image is stretched into vector form and the multiple images are combined into a single image matrix X to generate the LBP feature space; perform mean normalization on the images: take the mean value of all images to generate the mean face (meanface), and subtract the mean value from all images; if the image dimension is not high at this point, find the eigenvalues and eigenvectors of the covariance matrix X*X^T directly; if the image dimension is too high, first calculate the eigenvalues and eigenvectors of the matrix X^T*X; because X^T*X*v = λ*v, performing a left multiplication by the matrix X gives X*X^T*(X*v) = λ*(X*v), which shows that the eigenvalues of X*X^T are the eigenvalues of X^T*X and the eigenvectors are u = X*v; 1.3) operational processing step: sort the eigenvalues from step 1.2) from largest to smallest, and take the first k eigenvalues and the corresponding first k eigenvectors (u1, u2, u3, ..., uk) as the LBP Eigenfaces, at which point each eigenvector is an eigenface; thus, through the new k-dimensional subspace, the original high-dimensional vector for a face can be represented by the low-dimensional coordinates (w1, w2, w3, ..., wk); where the eigenvectors are P*Q-dimensional vectors, and the coordinates w_k are calculated by the following equation (See Lei p. 1-2, Section 2 LBP Eigenfaces, which is exactly the same as these limitations.)
[Equation images from the claim (media_image1.png, media_image2.png, greyscale PNGs) omitted.]
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the use of eigenfaces with LBP (local binary patterns) as suggested by Lei to Panneer, Jabara, Goswami, Ramaswamy, Zhai, and Khan’s facial recognition. This can be done using known engineering techniques, with a reasonable expectation of success. The motivation for doing so is that using eigenfaces in combination with LBP for facial recognition balances global structure representation with local, robust feature extraction. While eigenfaces provide a compact representation of the entire face, LBP adds robustness against lighting variations and texture changes, overcoming the limitations of using either technique alone.
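The basic 8-neighbor LBP operator underlying Lei’s feature-extraction step can be sketched in pure Python. This is an illustrative sketch only, assuming a list-of-lists grayscale image; the function name `lbp_image` and the neighbor ordering are assumptions for illustration, not taken from Lei.

```python
def lbp_image(img):
    """Basic 8-neighbor Local Binary Pattern: each interior pixel is
    replaced by an 8-bit code whose bits record whether each neighbor
    is greater than or equal to the center pixel."""
    h, w = len(img), len(img[0])
    # Neighbor offsets, clockwise starting from the top-left.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    out = [[0] * (w - 2) for _ in range(h - 2)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            center = img[y][x]
            code = 0
            for bit, (dy, dx) in enumerate(offsets):
                if img[y + dy][x + dx] >= center:
                    code |= 1 << bit
            out[y - 1][x - 1] = code
    return out
```

Each LBP image is then flattened into a column of the matrix X described in step 1.2), after which the mean-subtraction and eigendecomposition proceed as recited.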
Regarding claim 3, Panneer, Jabara, Goswami, Ramaswamy, Zhai, Khan, and Lei disclose, the blockchain and human characteristics intelligence recognition-based appointment-based elderly care system of claim 2:
the denoising process of the image in step 1.1) uses median filtering, where the median filtering is a process of arranging the pixel values in order of size in a convolution frame, selecting the middle pixel value as the filtered pixel value, and cycling through all the pixel values in turn to produce the filtered image; (See Goswami ¶59, “The illustrative embodiments may also apply a median filter, e.g., a median filter of size 5×5, for denoising the image before extracting the features. The median filter can handle many common types of image noises. The size of the median filter determines the strength of the denoising. The size of the median filter depends on the image size and the desired level of denoising.”)
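The median filtering process recited above, sorting the pixel values within the convolution frame and selecting the middle value, can be sketched as follows. The sketch is illustrative only; the function name and the choice to leave border pixels unfiltered are assumptions, not part of the claim or of Goswami.

```python
def median_filter(img, k=3):
    """Median filter as recited: within each k-by-k window, sort the
    pixel values by size and take the middle value as the filtered
    pixel, cycling through all interior pixels in turn."""
    r = k // 2
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]  # borders left unfiltered here
    for y in range(r, h - r):
        for x in range(r, w - r):
            window = sorted(img[y + dy][x + dx]
                            for dy in range(-r, r + 1)
                            for dx in range(-r, r + 1))
            out[y][x] = window[len(window) // 2]
    return out
```

A single impulse-noise pixel surrounded by uniform values is replaced by the window median, consistent with the impulse-noise rationale given for Goswami above.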
color correction using histogram correction, where more concentrated areas of the histogram are split and stretched, and more dispersed areas are combined and compressed so that the pixels within a range are approximately the same; (See Ramaswamy 16:42-45, “In various embodiments, image data can be pre-processed to improve object detection and tracking. Pre-processing can include histogram equalization or optimization”)
face alignment acquires images containing pure face sizes by key point recognition, transforming the face. (See Zhai ¶60, “With reference to FIGS. 6(a) and 6(b), face horizontal-alignment operation is performed according to the beautiful face prediction key point in the step S122 due to the problems of deflection and tilt of the face in the beautiful face image, for example, beautiful face prediction key points of left and right eyes are used, an included angle between a connecting line of the left and right eyes and a horizontal plane is calculated, and the beautiful face image is rotated according to a value of the included angle, so that the face are horizontally aligned to overcome data difference caused by posture deflection.”)
Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over
Panneer Selvam et al. (US Pub. No. 2022/0157143 A1) in view of Jabara (US Pub. No. 2020/0137569 A1) in view of Goswami et al. (US Pub. No. 2019/0238568 A1) in view of Ramaswamy (US Pat. No. 9,224,060 B1) in view of Zhai et al. (US Pub. No. 2021/0004570 A1) in view of Khan et al. (US Pub. No. 2023/0206700 A1) in view of Lei et al. (“Face Recognition Using LBP Eigenfaces”) in view of Luo (US Pub. No. 2024/0015340 A1) in view of Shindo et al. (US Pub. No. 2003/0156199 A1) and in further view of Mysore Siddu et al. (US Pub. No. 2020/0118317 A1).
Regarding claim 4, Panneer, Jabara, Goswami, Ramaswamy, Zhai, Khan, and Lei disclose, the blockchain and human characteristics intelligence recognition-based appointment-based elderly care system of claim 3: face alignment uses face recognition for face keypoint detection to obtain keypoints; (See Zhai ¶60, “With reference to FIGS. 6(a) and 6(b), face horizontal-alignment operation is performed according to the beautiful face prediction key point in the step S122 due to the problems of deflection and tilt of the face in the beautiful face image, for example, beautiful face prediction key points of left and right eyes are used.” Whereby the use of 68 keypoints is an arbitrary design choice.)
then the face is rotated according to the angle between the left and right eye center coordinates and the horizontal direction to align it vertically, after alignment, and the obtained other face coordinates are similarly rotated; (See Zhai ¶60, “an included angle between a connecting line of the left and right eyes and a horizontal plane is calculated, and the beautiful face image is rotated according to a value of the included angle, so that the face are horizontally aligned to overcome data difference caused by posture deflection.”
Panneer, Jabara, Goswami, Ramaswamy, Zhai, Khan, and Lei disclose the above limitations, but they fail to disclose, obtain 68 keypoints.
However, Luo discloses, obtain 68 keypoints; (See Luo ¶81, “Detection of key points of a face is performed by applying a related algorithm. Generally, a common detection model detects 68 key points. In this model, a chin has 8 feature key points, a nose tip has 30 feature key points, a left eye corner has 36 feature key points, a right eye corner has 45 feature key points, a left mouth corner has 48 feature key points, and a right mouth corner has 54 feature key points.”)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to extract 68 keypoints from the face as suggested by Luo to Panneer, Jabara, Goswami, Ramaswamy, Zhai, Khan, and Lei’s facial alignment using keypoints. This can be done using known engineering techniques, with a reasonable expectation of success. The motivation for doing so is that the 68 keypoints act as a standard for mapping key structural components of the face, including eyebrows, eyes, nose, mouth, and jawline, allowing for high-quality, precise alignment.
Panneer, Jabara, Goswami, Ramaswamy, Zhai, Khan, Lei, and Luo disclose the above limitations, but they fail to disclose, then the width of the face in the horizontal direction is obtained according to the leftmost and rightmost coordinates of the lower jaw respectively after alignment.
However Shindo discloses, then the width of the face in the horizontal direction is obtained according to the leftmost and rightmost coordinates of the lower jaw respectively after alignment, (See Shindo ¶55, “A face-width detecting section 81 in the image-size detecting section 72 reads out the image data temporarily stored in the image-data memory section 91, detects the coordinates of the rightmost and leftmost portions of the face of the subject, and supplies a difference between the detected coordinates to a judgment section 82.”)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the obtaining of the width of the face as suggested by Shindo to Panneer, Jabara, Goswami, Ramaswamy, Zhai, Khan, Lei, and Luo’s cropping of a face prior to facial recognition. This can be done using known engineering techniques, with a reasonable expectation of success. The motivation for doing so is to ensure that horizontal information of the face is obtained, which includes features such as the mouth and eyes, which are essential for distinguishing individuals.
Panneer, Jabara, Goswami, Ramaswamy, Zhai, Khan, Lei, Luo, and Shindo disclose the above limitations, but they fail to disclose, and then the vertical length of the face is obtained from the ratio of the center of the eyes to the center of the mouth.
However, Mysore Siddu discloses, and then the vertical length of the face is obtained from the ratio of the center of the eyes to the center of the mouth. (See Mysore Siddu ¶56, “The predefined (or golden) ratio thus defines the relative proportions of different features of the face. For example, in some embodiments, the distance between the eyes and mouth for a female subject may be assumed to be approximately 36 percent of the length of the face of the female subject.”)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the obtaining of the vertical length of the face as suggested by Mysore Siddu to Panneer, Jabara, Goswami, Ramaswamy, Zhai, Khan, Lei, Luo, and Shindo’s cropping prior to facial recognition. This can be done using known engineering techniques, with a reasonable expectation of success. The motivation for doing so is to enable vertical cropping, whereby, by cropping to only the necessary vertical length, the system reduces the number of pixels to be processed, allowing for faster and more efficient face identification.
Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over
Panneer Selvam et al. (US Pub. No. 2022/0157143 A1) in view of Jabara (US Pub. No. 2020/0137569 A1) in view of Goswami et al. (US Pub. No. 2019/0238568 A1) in view of Ramaswamy (US Pat. No. 9,224,060 B1) in view of Zhai et al. (US Pub. No. 2021/0004570 A1) in view of Khan et al. (US Pub. No. 2023/0206700 A1) in view of Lei et al. (“Face Recognition Using LBP Eigenfaces”) and in further view of Paul et al. (“Real-Time Low Resolution Face Recognition Using Local Binary Pattern Histograms, Eigenface, and Fisherface Algorithms”).
Regarding claim 5, Panneer, Jabara, Goswami, Ramaswamy, Zhai, Khan, and Lei disclose, the blockchain and human characteristics intelligence recognition-based appointment-based elderly care system of claim 2, but they fail to disclose, a face is reconstructed by meanface + u * k, i.e. the average face + feature vector * reduced dimensional coordinates for representation.
However Paul discloses, a face is reconstructed by meanface + u * k, i.e. the average face + feature vector * reduced dimensional coordinates for representation. (See p. 17 lines 6-9, “Those m eigenvectors are m number of prototypical facial features. The image can be reconstructed by adding the mean face with those eigenvectors with different proportions (called weighted vectors). The formation of the main image from the eigenvectors is illustrated in Figure 12.”)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the reconstruction of a face using the meanface and weighted eigenvectors as suggested by Paul to Panneer, Jabara, Goswami, Ramaswamy, Zhai, Khan, and Lei’s use of the meanface for facial recognition. This can be done using known engineering techniques, with a reasonable expectation of success. The motivation for doing so is efficient representation, whereby a small subset of top eigenvectors can reconstruct a very accurate approximation of the original face, dramatically reducing data storage and processing requirements.
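The projection and reconstruction relied on above, representing a face by its coordinates in the eigenface subspace and rebuilding it as the mean face plus the weighted eigenvectors, can be sketched as follows. The sketch is illustrative only; the function names and the flat-vector face representation are assumptions, not drawn from Paul or from the claims.

```python
def project(x, meanface, eigvecs):
    """Eigenface coordinates: w_i = u_i . (x - meanface),
    the dot product of each eigenvector with the centered face."""
    d = [a - b for a, b in zip(x, meanface)]
    return [sum(u[j] * d[j] for j in range(len(d))) for u in eigvecs]

def reconstruct(w, meanface, eigvecs):
    """Rebuild the face as meanface + sum_i w_i * u_i, i.e. the
    average face plus the weighted eigenvectors."""
    out = list(meanface)
    for wi, u in zip(w, eigvecs):
        for j in range(len(out)):
            out[j] += wi * u[j]
    return out
```

With orthonormal eigenvectors, projecting a face and reconstructing it from all of its coordinates recovers the face exactly; truncating to the top k coordinates yields the compact approximation cited in the motivation above.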
Allowable Subject Matter
Claims 6-9 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Regarding claim 6, the blockchain and human characteristics intelligence recognition-based appointment-based elderly care system of claim 1: the power supply module comprises an output sampling circuit, a transient detection circuit and a fixed on-time generation circuit; output sampling circuitry collects the output voltage and eliminates the error between the steady state output voltage and the reference value by means of a high bandwidth op-amp, sampling the inductor current ripple instead of the output voltage ripple for control so that the output capacitor is selected as a small ESR ceramic capacitor to improve the output ripple; transient detection circuitry for detecting rapid increases in load and improving transient response speed by forcing the main power tube on; the fixed on-time generation circuit accepts control signals to generate the control signals required by the driver circuit; an input voltage sampling circuit is provided at the converter input of the power module to detect the input voltage and serve as the input signal for the fixed on-time generation circuit, so that the system switching frequency remains approximately constant in the steady state when the input voltage changes; a current sampling circuit is provided at the inductor connected to the converter to sample the inductor current ripple information and convert it into a voltage signal to serve as the input signal for the transient detection circuit and the fixed on-time generation circuit; the output voltage is sampled through a voltage divider network and then adjusted with the reference voltage by a high bandwidth op-amp, and is also used as the input signal for the fixed on-time generation circuit and the transient detection circuit; the transient detection circuit compares the current ripple information with the output voltage information, and when a dramatic increase in load occurs such that Vcomp > ViL, the modulator is controlled to force the upper tube on until Vcomp < ViL is detected again, restoring steady-state COT control, thus achieving an approximate single-cycle transient response; where Vcomp is the error amplification signal of the output voltage after the operational amplifier, and ViL is the ripple voltage signal obtained from the inductor current sampling and conversion. (The disclosed prior art of record fails to disclose the limitations of this claim.)
Regarding claims 7-9, these claims are objected to since they depend from objected-to claim 6.
Conclusion
Listed below are the prior art references made of record and not relied upon that are considered pertinent to applicant’s disclosure.
Tiron et al. (US Pub. No. 2023/0190140 A1) Methods and apparatus provide monitoring of coughing and/or a sleep disordered breathing state of a person. One or more sensors may be configured for non-contact active and/or passive sensing. The processor(s) may extract respiratory effort signal(s) from one or more motion signals generated by active non-contact sensing with the sensor(s). The processor(s) may extract one or more energy band signals from an acoustic audio signal generated by passive non-contact sensing with the sensor(s). The processor(s) may assess the energy band signal(s) and/or the respiratory efforts signal(s) to generate intensity signal(s) representing sleep disorder breathing modulation. The processor(s) may classify feature(s) derived from the one or more intensity signals to generate measure(s) of coughing and/or sleep disordered breathing. The processor may evaluate sensing signal(s) to generate indication(s) of cough event(s) and/or cough type which may include generating an indication of a coronavirus disease or a coronavirus disease cough type.
Matsuoka et al. (US Pub. No. 2019/0200872 A1) A method of optimizing sleep of a subject using smart-home devices may include operating a smart-home system that is configured to operate in a normal mode and a sleep mode. The method may also include determining that the smart-home system should transition into the sleep mode. The smart-home devices may use a set of default parameters when operating in the sleep mode. The method may additionally include monitoring, while in the sleep mode, a sleep cycle of the subject using the smart-home devices. The method may further include detecting behavior of the subject that indicates that the sleep cycle of the subject is being interrupted or about to be interrupted, determining an environmental control that corresponds with the behavior of the subject, and adjusting the environmental control using the smart-home devices to prevent or stop the sleep cycle of the subject from being interrupted.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DAVID PERLMAN whose telephone number is (571) 270-1417.
The examiner can normally be reached on Monday - Friday; 10:00am -6:30pm.
Examiner interviews are available via telephone and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Chineyere Wills-Burns, can be reached at (571) 272-9752. The fax phone number for the organization where this application or proceeding is assigned is (571) 273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at (866) 217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call (800) 786-9199 (IN USA OR CANADA) or (571) 272-1000.
/DAVID PERLMAN/Primary Examiner, Art Unit 2673