DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
Claims 1, 2, 5-10, 13-21, and 23-25 are currently pending and have been examined.
Priority
Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.
Information Disclosure Statement
The information disclosure statements (IDS) submitted on 12/27/2023 and 07/18/2024 have been considered by the examiner and initialed copies of the IDS are hereby attached.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1, 2, 20, 21, and 23-25 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by SHIN et al. (US 20230018686 A1), hereinafter SHIN.
Regarding claim 1, SHIN discloses
A computer implemented fall detection method for determining whether a person is in a fall state (see Fig. 2, smart home device 200, further see paragraph 0005, “In some embodiments, a method for performing fall detection is described.”, further see method steps of Fig. 6), the method comprising:
obtaining a classification of a region within a building that is monitored by an active reflected wave detector (see Fig. 8, step 610 where determining whether a person is present for monitoring in the monitoring region is, “obtaining a classification of a region”, further see paragraph 0081, “At block 610, a determination may be made whether a person is present within the field-of-view of the radar IC. The determination of block 610 can be performed either using radar or some other presence detection technology, such as passive infrared presence detection.”);
configuring a classifier based on the classification (see Fig. 6, applying a pre-trained machine learning model, further see paragraph 0088, “At block 650, a feature extraction process may be performed. The feature extraction process may identify some number of features of the tracklet determined at block 645. For instance, as previously detailed 10 or more features may be determined at block 650. In some embodiments, 16 features are determined. At block 655, a pre-trained machine learning model may receive the features identified at block 650. The pre-trained machine learning model may be a random forest machine learning model or some other form of machine learning model, such as a neural network. Based on the features provided to the pre-trained machine learning model, a classification may be performed that indicates whether the features represent a fall or not.”, where fall detection is “based on” determining occupancy as one, as fall detection is otherwise disabled, see support in paragraph 0082, “In some embodiments, at block 610, a determination may also be made if more than one person is present. If more than one person is determined to be present, fall monitoring may be disabled. In such a situation, monitoring may continue to occur to determine when only a single person is present. In some embodiments, fall detection may be enabled when only a single person has been determined to be present since if more than one person is present, help is already available if a person falls.”);
controlling the active reflected wave detector to measure wave reflections from the region within the building to receive measured wave reflection data that is obtained by the active reflected wave detector (see Fig. 6, steps 615 and 620 of emitting a radar signal within the monitoring region and receiving the reflected radar data, further see paragraph 0083, “At block 615, radio waves, or more generally, electromagnetic radiation, may be emitted. Radio waves may be emitted at between 40 GHz and 80 GHz. In some embodiments, radio waves are emitted around 60 GHz. The emitted radio waves may be frequency modulated continuous wave radar. At block 620, reflections of the emitted radio waves may be received off of objects present within the field-of-view of the radar IC. The reflections may be received via multiple antennas of the radar IC (or via antennas separate from the radar processing circuitry).”); and
using the classifier, after said configuring, to determine whether a person is in a fall state using the measured wave reflection data (see Fig. 6, step 660 of determining whether a fall has occurred by a human based on the step of applying pre-trained machine learning model at step 655).
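For illustration only, the SHIN Fig. 6 flow mapped above (presence gating at block 610, feature extraction at blocks 645-650, pre-trained classification at blocks 655-660) may be sketched as follows; the function names, the particular feature statistics, and the stub model are hypothetical and are not taken from the reference:

```python
import numpy as np

def detect_fall(reflection_frames, occupancy, model, n_features=16):
    """Sketch of SHIN's Fig. 6 flow: gate on single-person occupancy
    (block 610), reduce the measured reflections to a fixed-length
    feature vector (blocks 645-650), and classify with a pre-trained
    model (block 655). Returns True/False, or None when fall
    monitoring is disabled."""
    if occupancy != 1:
        # Per paragraph 0082, fall monitoring is disabled unless
        # exactly one person is present in the field-of-view.
        return None
    frames = np.asarray(reflection_frames, dtype=float)
    # Stand-in for the 16 tracklet features of block 650: four summary
    # statistics plus twelve percentiles of the reflection magnitudes.
    features = np.concatenate([
        [frames.mean(), frames.std(), frames.min(), frames.max()],
        np.percentile(frames, np.linspace(5, 95, n_features - 4)),
    ])
    # Block 655: the pre-trained model (e.g., a random forest) maps the
    # feature vector to a fall / no-fall decision.
    return bool(model(features))
```

A stub such as `lambda f: f[3] > 1.0` stands in here for SHIN's pre-trained random forest or neural network.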
Regarding claim 2, SHIN further discloses
The computer implemented method of claim 1, wherein the region that is monitored by the active reflected wave detector is, or is within, an enclosed space of the building (see paragraph 0021, “Smart-home device 110 may be positioned such that a radar chip of smart-home device 110 has a field-of-view 120 of as much of environment 100 as possible. Smart-home device 110 may be placed on a shelf or some other semi-permanent location from where smart-home device 110 does not need to be frequently moved. For example, the smart-home device 110 may be affixed to a wall or ceiling. Such an arrangement can allow smart-home device 110 to monitor environment 100 as long as smart-home device 110 has power and is not moved. Therefore, monitoring for falls may be performed continuously in the room where smart-home device 110 is located.”).
Regarding claim 20, SHIN further discloses
At least one non-transitory computer-readable storage medium comprising instructions which, when executed by at least one processor cause the at least one processor to perform the method of claim 1 (see Fig. 2, processing module 210 performs the method of claim 1, further see paragraph 0034, “Processing module 210 may include one or more special-purpose or general-purpose processors. Such special-purpose processors may include processors that are specifically designed to perform the functions detailed herein.”).
Regarding claim 21, the same cited sections and rationale as applied to claim 1 are applied.
Regarding claim 23, SHIN further discloses
The device of claim 21, wherein the device further comprises the active reflected wave detector (see Fig. 2, device 200 includes radar sensor 205).
Regarding claim 24, SHIN further discloses
The device of claim 21, wherein the active reflected wave detector is a radar sensor (see Fig. 2, device 200 includes radar sensor 205).
Regarding claim 25, the same cited sections and rationale as applied to claim 1 are applied.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 5-7 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over SHIN et al. (US 20230018686 A1) in view of Gillian et al. (US 20170097413 A1), hereinafter Gillian.
Regarding claim 5, SHIN discloses [Note: what SHIN fails to disclose is strike-through]
The computer implemented method of claim 1,
Gillian discloses,
wherein the classification of the region comprises a size classification of the region (see paragraph 0136, “At 1408, a context for the space is determined based on at least the radar features. In some cases, the context for the device is determined based on geometries and occupancies derived from the radar features. For example, the context manager may determine a size of the space, number of other occupants, and distances to those occupants in order to set a privacy bubble around the device. In other cases, a set of landmarks in the radar features is compared to known 3D context models. This can be effective to identify the space in which the device is operating based on a known 3D context model.”).
It would have been obvious to someone with ordinary skill in the art prior to the effective filing date of the claimed invention to incorporate the features as disclosed by Gillian into the invention of SHIN. Both references are considered analogous arts to the claimed invention as they both disclose the use of machine learning with a radar device where human events in an enclosed space are classified. SHIN discloses the determination of occupancy in a space to enable radar-based fall detection. Gillian discloses a neural network system where contextual information (such as occupancy, size of a room, type of room, etc.) is used to configure and implement radar-based contextual sensing. The combination of SHIN and Gillian would be obvious with a reasonable expectation of success in order to improve performance of the radar device (see paragraph 0004 of Gillian).
Regarding claim 6, SHIN discloses [Note: what SHIN fails to disclose is strike-through]
The computer implemented method of claim 1,
Gillian discloses,
wherein the classification of the region comprises a functional design classification of the region (see paragraph 0137, “Although described as known, the 3D context models may also be accessed or downloaded to the device, such as based on device location (e.g., GPS). Alternately or additionally, other types of sensor data can be compared with that of known 3D context models. For example, sounds and wireless networks detected by the device can be compared to acoustic and network data of the known 3D context models. Continuing the ongoing example, the context manager 112 of the tablet computer 102-4 determines a context of environment 1500 as “living room,” a private, semi-secure context.”).
It would have been obvious to someone with ordinary skill in the art prior to the effective filing date of the claimed invention to incorporate the features as disclosed by Gillian into the invention of SHIN. Both references are considered analogous arts to the claimed invention as they both disclose the use of machine learning with a radar device where human events in an enclosed space are classified. SHIN discloses the determination of occupancy in a space to enable radar-based fall detection. Gillian discloses a neural network system where contextual information (such as occupancy, size of a room, type of room, etc.) is used to configure and implement radar-based contextual sensing. The combination of SHIN and Gillian would be obvious with a reasonable expectation of success in order to improve performance of the radar device (see paragraph 0004 of Gillian).
Regarding claim 7, SHIN discloses [Note: what SHIN fails to disclose is strike-through]
The computer implemented method of claim 1,
Gillian discloses,
wherein the classification of the region comprises a geometric classification of the region (see paragraph 0136, “At 1408, a context for the space is determined based on at least the radar features. In some cases, the context for the device is determined based on geometries and occupancies derived from the radar features. For example, the context manager may determine a size of the space, number of other occupants, and distances to those occupants in order to set a privacy bubble around the device. In other cases, a set of landmarks in the radar features is compared to known 3D context models. This can be effective to identify the space in which the device is operating based on a known 3D context model.”).
It would have been obvious to someone with ordinary skill in the art prior to the effective filing date of the claimed invention to incorporate the features as disclosed by Gillian into the invention of SHIN. Both references are considered analogous arts to the claimed invention as they both disclose the use of machine learning with a radar device where human events in an enclosed space are classified. SHIN discloses the determination of occupancy in a space to enable radar-based fall detection. Gillian discloses a neural network system where contextual information (such as occupancy, size of a room, type of room, etc.) is used to configure and implement radar-based contextual sensing. The combination of SHIN and Gillian would be obvious with a reasonable expectation of success in order to improve performance of the radar device (see paragraph 0004 of Gillian).
Regarding claim 18, SHIN discloses [Note: what SHIN fails to disclose is strike-through]
The computer implemented method of claim 1,
Gillian discloses,
wherein the classification is selected from a group comprising:
a living room (see paragraph 0137, “Although described as known, the 3D context models may also be accessed or downloaded to the device, such as based on device location (e.g., GPS). Alternately or additionally, other types of sensor data can be compared with that of known 3D context models. For example, sounds and wireless networks detected by the device can be compared to acoustic and network data of the known 3D context models. Continuing the ongoing example, the context manager 112 of the tablet computer 102-4 determines a context of environment 1500 as “living room,” a private, semi-secure context.”); and
a non living-room (see paragraph 0128, “At 1310, a set of 3D landmarks of the space is generated based on the 3D radar features and the spatial orientation thereof. These landmarks may include identifiable physical characteristics of the space, such as furniture, basic shape and geometry of the space, reflectivity of surfaces, and the like. For example, 3D landmarks of a conference room may include a table having legs of a particular shape and an overhead projector mounted to a mast that protrudes from the ceiling.”).
It would have been obvious to someone with ordinary skill in the art prior to the effective filing date of the claimed invention to incorporate the features as disclosed by Gillian into the invention of SHIN. Both references are considered analogous arts to the claimed invention as they both disclose the use of machine learning with a radar device where human events in an enclosed space are classified. SHIN discloses the determination of occupancy in a space to enable radar-based fall detection. Gillian discloses a neural network system where contextual information (such as occupancy, size of a room, type of room, etc.) is used to configure and implement radar-based contextual sensing. The combination of SHIN and Gillian would be obvious with a reasonable expectation of success in order to improve performance of the radar device (see paragraph 0004 of Gillian).
Claim 19 is rejected under 35 U.S.C. 103 as being unpatentable over SHIN et al. (US 20230018686 A1) in view of Gillian et al. (US 20170097413 A1), further in view of McQueen et al. (US 20200301378 A1), hereinafter McQueen.
Regarding claim 19, the combination of SHIN and Gillian discloses [Note: what the combination of SHIN and Gillian fails to disclose is strike-through]
The computer implemented method of claim 18,
Gillian discloses [Note: what Gillian fails to disclose is strike-through],
wherein each classification in the group corresponds to a respective room size (see paragraph 0084, “In this particular example, the context manager 112 includes context models 536, device contexts 538, and context settings 540. The context models 536 include physical models of various spaces, such as dimensions, geometry, or features of a particular room. In other words, a context model can be considered to describe the unique character of particular space, like a 3D fingerprint. In some cases, building the context models 536 is implemented via machine learning techniques and may be performed passively as a device enters or passes through a particular space. Device contexts 538 include and may describe multiple contexts in which the computing device 102 may operate. These contexts may include a standard set of work contexts, such as “meeting,” “do not disturb,” “available,” “secure,” “private,” and the like. For example, the “meeting” context may be associated with the device being in a conference room, with multiple other coworkers and customers.”, further see paragraphs 0128 and 0137),
It would have been obvious to someone with ordinary skill in the art prior to the effective filing date of the claimed invention to incorporate the features as disclosed by Gillian into the invention of SHIN. Both references are considered analogous arts to the claimed invention as they both disclose the use of machine learning with a radar device where human events in an enclosed space are classified. SHIN discloses the determination of occupancy in a space to enable radar-based fall detection. Gillian discloses a neural network system where contextual information (such as occupancy, size of a room, type of room, etc.) is used to configure and implement radar-based contextual sensing. The combination of SHIN and Gillian would be obvious with a reasonable expectation of success in order to improve performance of the radar device (see paragraph 0004 of Gillian).
McQueen discloses,
wherein the living room classification corresponds to a room size that is greater than a room size corresponding to the non living-room classification (see paragraph 0152, “Dimension data 2130 can be used to deduce a type of room in a floor plan, according to certain embodiments. Dimension data 2130 can be determine by the system using the floor plan generation techniques described above. Alternatively or additionally, dimension data may be provided by a user or other resource (e.g., from city planning website with blueprints of the building). Dimension data 2130 can include the location of one or more walls within the rooms of the building, the dimensions of the walls (e.g., height, width, thickness), the location and/or dimensions of other objects in the building (e.g., sofas, tables, etc.), or the like. In some cases, the dimensions of the rooms (which may be determined based on the location/dimensions of the walls) can inform a type of room in a number of ways. For example, very small rooms may be more likely to be bathroom or closet, and comparatively large rooms may be more likely to be a living room or dining room.”).
It would have been obvious to someone with ordinary skill in the art prior to the effective filing date of the claimed invention to incorporate the features as disclosed by McQueen into the invention of SHIN in view of Gillian. All three references are considered analogous arts to the claimed invention as they all disclose the use of radar sensors within an enclosed environment for object detection and tracking. The combination of SHIN, Gillian, and McQueen would be obvious with a reasonable expectation of success in order to differentiate between large rooms and small rooms for more efficient object tracking.
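For illustration only, the dimension-based room typing described by McQueen in paragraph 0152 may be sketched as a simple threshold rule; the threshold value and the class labels used here are hypothetical, not taken from the reference:

```python
def classify_region(area_m2, threshold_m2=12.0):
    """Hypothetical size-based classification in the spirit of McQueen
    paragraph 0152: comparatively large rooms are more likely to be
    living rooms, and very small rooms are more likely to be
    non living-rooms (e.g., bathrooms or closets)."""
    return "living room" if area_m2 >= threshold_m2 else "non living-room"
```

Under this sketch, the living room classification necessarily corresponds to a room size greater than that of the non living-room classification, as recited in claim 19.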
Claims 8-10, 13, and 15 are rejected under 35 U.S.C. 103 as being unpatentable over SHIN et al. (US 20230018686 A1) in view of Guttmann et al. (US 20180150698 A1), hereinafter Guttmann.
Regarding claim 8, SHIN discloses [Note: what SHIN fails to disclose is strike-through]
The computer implemented method of claim 1, trained classifier model of the plurality of trained classifier models, and the selected trained classifier model is used to determine whether the person is in a fall state (see paragraph 0046, “The machine learning model may be trained prior to being installed on smart-home device 201 such that the pre-trained machine learning model can be used on a large number of smart-home devices being manufactured. Therefore, once installed on processing module 210, the machine learning model of fall feature vector detection engine 214 may be static. Separate machine learning models may be used depending on the type of location where the smart home device is to be placed. For instance, different machine learning models, that are trained separately, use different weightings, and/or different types of machine learning (e.g., a neural network) may be used based on the type of installation location, such as a wall or ceiling.”).
Guttmann discloses,
wherein a plurality of trained classifier models are accessible to the classifier, the configuring the classifier comprises selecting a trained classifier model of the plurality of trained classifier models (see paragraph 0100, “In some embodiments, obtaining inference models (Step 730) may comprise selecting an inference model of a plurality of alternative inference models. For example, the plurality of alternative inference models may be stored in memory (such as memory units 210, shared memory modules 410, etc.), and the selection of the inference model may be based, at least in part, on available information, such as the scene information. In some embodiments, obtaining inference models (Step 730) may comprise selecting one or more training examples, and training a machine learning algorithm and/or a deep learning algorithm using the selected training examples.”, further see paragraph 0102, “In some embodiments, at least part of the inference model obtained by Step 730 may comprise one or more artificial neural networks. In some embodiments, obtaining inference models (Step 730) may comprise generating one or more artificial neural network models, for example by selecting one or more parameters of an artificial neural network model, by selecting a portion of an artificial neural network model, by selecting one or more artificial neural network model of a plurality of alternative artificial neural network models, by training an artificial neural network model on training examples, and so forth”),
It would have been obvious to someone with ordinary skill in the art prior to the effective filing date of the claimed invention to incorporate the features as disclosed by Guttmann into the invention of SHIN. Both references are considered analogous arts to the claimed invention as they both disclose the use of radar sensors within an enclosed environment for object detection using machine learning. The combination of SHIN and Guttmann would be obvious with a reasonable expectation of success in order to create a more adaptable device.
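For illustration only, the claim-8 mapping above (selecting one trained classifier model from a plurality based on available information, per Guttmann's Step 730) may be sketched as follows; the registry, keys, and stub models are hypothetical stand-ins for trained models:

```python
# Hypothetical registry of pre-trained classifier models, keyed by the
# region classification; each lambda stands in for a trained model.
MODELS = {
    "living room": lambda features: sum(features) > 2.0,
    "bathroom": lambda features: sum(features) > 1.0,
}

def configure_classifier(region_classification, default="living room"):
    """Configure the classifier by selecting one trained model from the
    plurality of accessible models (cf. Guttmann paragraph 0100,
    Step 730: selecting an inference model of a plurality of
    alternative inference models based on available information)."""
    return MODELS.get(region_classification, MODELS[default])
```

The selected model is then the one used to determine whether the person is in a fall state, per the final limitation of claim 8.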
Regarding claim 9, SHIN further discloses
The computer implemented method of claim 8, the method further comprising:
determining one or more parameters associated with the measured wave reflection data (see Fig. 6, method steps of center-of-mass tracking at 645 and feature extraction at 650, which are based on the received radar data at step 620); and
supplying the determined parameters as inputs into the selected trained classifier model to determine whether the person is in a fall state (see Fig. 6, step 655, where the features extracted at step 650 are supplied as inputs to the pre-trained machine learning model).
Regarding claim 10, SHIN further discloses
The computer implemented method of claim 9, wherein the determined parameters comprise features extracted from the measured wave reflection data and do not comprise the wave reflection data itself (see Fig. 6, feature extraction 650, further see paragraph 0088, “At block 650, a feature extraction process may be performed. The feature extraction process may identify some number of features of the tracklet determined at block 645. For instance, as previously detailed 10 or more features may be determined at block 650. In some embodiments, 16 features are determined. At block 655, a pre-trained machine learning model may receive the features identified at block 650. The pre-trained machine learning model may be a random forest machine learning model or some other form of machine learning model, such as a neural network. Based on the features provided to the pre-trained machine learning model, a classification may be performed that indicates whether the features represent a fall or not.”).
Regarding claim 13, SHIN further discloses
The computer implemented method of claim 8, wherein the classifier models are stored on non-transient memory of a device, wherein the device comprises the active reflected wave detector (see Fig. 2, where the classifier models are stored on the processing module of the device, further see paragraph 0046, “The machine learning model may be trained prior to being installed on smart-home device 201 such that the pre-trained machine learning model can be used on a large number of smart-home devices being manufactured. Therefore, once installed on processing module 210, the machine learning model of fall feature vector detection engine 214 may be static. Separate machine learning models may be used depending on the type of location where the smart home device is to be placed. For instance, different machine learning models, that are trained separately, use different weightings, and/or different types of machine learning (e.g., a neural network) may be used based on the type of installation location, such as a wall or ceiling.”).
Regarding claim 15, SHIN discloses [Note: what SHIN fails to disclose is strike-through]
The computer implemented method of claim 8,
Guttmann discloses,
wherein the models are deep learning models (see paragraph 0074, “In some embodiments, analyzing image data, for example by Step 620 and/or Step 660 and/or Step 720 and/or Step 750 and/or Step 920 and/or Step 930 and/or Step 1120, may comprise analyzing the image data and/or the preprocessed image data using rules, functions, procedures, artificial neural networks, object detection algorithms, face detection algorithms, visual event detection algorithms, action detection algorithms, motion detection algorithms, background subtraction algorithms, inference models, and so forth. Some examples of such inference models may include: an inference model preprogrammed manually; a classification model; a regression model; a result of training algorithms (such as machine learning algorithms and/or deep learning algorithms) on training examples, where the training examples may include examples of data instances, and in some cases, a data instance may be labeled with a corresponding desired label and/or result; and so forth.”).
It would have been obvious to someone with ordinary skill in the art prior to the effective filing date of the claimed invention to incorporate the features as disclosed by Guttmann into the invention of SHIN. Both references are considered analogous arts to the claimed invention as they both disclose the use of radar sensors within an enclosed environment for object detection using machine learning. The combination of SHIN and Guttmann would be obvious with a reasonable expectation of success in order to create a more efficient and accurate device.
Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over SHIN et al. (US 20230018686 A1) in view of Guttmann et al. (US 20180150698 A1), further in view of Seo et al. (US 20200349247 A1), hereinafter Seo.
Regarding claim 14, the combination of SHIN and Guttmann discloses [Note: what the combination of SHIN and Guttmann fails to disclose is strike-through]
The computer implemented method of claim 13,
Seo discloses,
wherein storage of each classifier model respectively consumes no more than 500 kilobytes of memory (see paragraph 0072, “FIG. 9 illustrates a micrograph of a 65 nm prototype chip for the smart hardware security engine 10 of FIG. 1. The prototype chip was implemented in 65 nm LP CMOS. The total on-chip memory is 64 kB, where 52 kB is used for neural network weights.”).
It would have been obvious to someone with ordinary skill in the art prior to the effective filing date of the claimed invention to incorporate the features as disclosed by Seo into the invention of SHIN in view of Guttmann. All three references are considered analogous arts to the claimed invention as they all disclose the use of machine learning for object tracking. The combination would be obvious with a reasonable expectation of success in order to reduce the memory used by the system.
Claims 16 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over SHIN et al. (US 20230018686 A1) in view of Guttmann et al. (US 20180150698 A1), further in view of Dangi (US 5491776 A).
Regarding claim 16, the combination of SHIN and Guttmann discloses [Note: what the combination of SHIN and Guttmann fails to disclose is strike-through]
The computer implemented method of claim 15,
Dangi discloses,
wherein each classifier model comprises an input and output and no more than 4 condensed layers (see Col. 8, line 58 - Col. 9, line 3, “The signal processing apparatus of the present embodiment includes a feed forward neural network NN of the three-layer structure including an input layer I1 constituted from 64 input interface units (neurons) equal to the number of picture elements included in the one block 12, an intermediate layer H1 constituted from a number of neurons independent of the number of the neurons included in the input layer I1, and an output layer O1 constituted from a number of neurons equal to the number (64) of the neurons included in the input layer I1. It is to be noted that, since, in the present embodiment, the number of DCT coefficients obtained as outputs is 64, also the output layer O1 is constituted from 64 neurons.”).
It would have been obvious to someone with ordinary skill in the art prior to the effective filing date of the claimed invention to incorporate the features as disclosed by Dangi into the invention of SHIN in view of Guttmann. All three references are considered analogous arts to the claimed invention as they all disclose the use of machine learning for object tracking. The combination would be obvious with a reasonable expectation of success in order to reduce the memory used by the system.
Regarding claim 17, the combination of SHIN and Guttmann discloses [Note: what the combination of SHIN and Guttmann fails to disclose is strike-through]
The computer implemented method of claim 16,
Dangi discloses,
wherein each condensed layer consists of no more than 64 neurons (see Col. 8, line 58 - Col. 9, line 3, “The signal processing apparatus of the present embodiment includes a feed forward neural network NN of the three-layer structure including an input layer I1 constituted from 64 input interface units (neurons) equal to the number of picture elements included in the one block 12, an intermediate layer H1 constituted from a number of neurons independent of the number of the neurons included in the input layer I1, and an output layer O1 constituted from a number of neurons equal to the number (64) of the neurons included in the input layer I1. It is to be noted that, since, in the present embodiment, the number of DCT coefficients obtained as outputs is 64, also the output layer O1 is constituted from 64 neurons.”).
It would have been obvious to someone with ordinary skill in the art prior to the effective filing date of the claimed invention to incorporate the features as disclosed by Dangi into the invention of SHIN in view of Guttmann. All three references are considered analogous arts to the claimed invention as they all disclose the use of machine learning for object tracking. The combination would be obvious with a reasonable expectation of success in order to reduce the memory used by the system.
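For illustration only, the scale of the network recited in claims 16 and 17 (an input, an output, and no more than 4 condensed layers of no more than 64 neurons each) can be checked against the 500-kilobyte bound of claim 14 with a simple parameter count; the assumption of float32 (4-byte) parameters and the example widths are hypothetical:

```python
def model_size_bytes(layer_widths, bytes_per_weight=4):
    """Memory consumed by the weights and biases of a dense feed-forward
    network, given the widths of its layers in order
    (input, hidden..., output). Assumes float32 (4-byte) parameters."""
    n_params = sum(a * b + b for a, b in zip(layer_widths, layer_widths[1:]))
    return n_params * bytes_per_weight

# 16 input features, four 64-neuron condensed layers, a 2-class output:
widths = [16, 64, 64, 64, 64, 2]
size = model_size_bytes(widths)  # 13,698 parameters -> 54,792 bytes
```

At roughly 54 KB, such a model would fall well under a 500-kilobyte memory budget.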
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
Amihood et al. (US 20210396867 A1) discloses a radar system which uses a context-based neural network for classification.
LAGACE et al. (US 20210321222 A1) discloses radar-based fall detection where the number of units utilized by the device is determined by the size of the room (see paragraph 0232).
LIN et al. (US 20200166610 A1) is considered close pertinent art to the claimed invention as it discloses human fall detection in small rooms such as bathrooms.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to NAZRA N. WAHEED whose telephone number is (571)272-6713. The examiner can normally be reached M-F (8 AM - 4:30 PM).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Vladimir Magloire can be reached at (571)270-5144. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/NAZRA NUR WAHEED/Examiner, Art Unit 3648